Test Report: Docker_Windows 22094

                    
4d318e45b0dac190a241a23c5ddc63ef7c67bab3:2025-12-10:42711

Failed tests (35/427)

Order  Failed test  Duration (s)
29 TestDownloadOnlyKic 4.28
67 TestErrorSpam/setup 59.97
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 532.76
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 373.92
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 53.55
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 54.26
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 3.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 740.76
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 54.43
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 20.21
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 5.31
199 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 122.37
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 243.4
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 22.52
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 54.36
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 0.1
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.49
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.49
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.5
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.5
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.48
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/powershell 2.83
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 20.18
360 TestKubernetesUpgrade 833.96
418 TestStartStop/group/no-preload/serial/FirstStart 543.59
430 TestStartStop/group/newest-cni/serial/FirstStart 543.9
464 TestStartStop/group/no-preload/serial/DeployApp 5.34
468 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 98.71
473 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 106.46
479 TestStartStop/group/no-preload/serial/SecondStart 378.71
489 TestStartStop/group/newest-cni/serial/SecondStart 382.22
507 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.72
511 TestStartStop/group/newest-cni/serial/Pause 12.33
512 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 232
TestDownloadOnlyKic (4.28s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-221900 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-221900 --alsologtostderr --driver=docker: (3.7191219s)
aaa_download_only_test.go:239: expected tarball file "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\preloaded-tarball\\preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4" to exist, but got error: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4: The system cannot find the file specified.
helpers_test.go:176: Cleaning up "download-docker-221900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-221900
--- FAIL: TestDownloadOnlyKic (4.28s)
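Note: the assertion at aaa_download_only_test.go:239 is a plain file-existence check on the preload cache. A minimal standalone sketch of an equivalent check (the path is illustrative, assembled from MINIKUBE_HOME and the preload tarball name quoted in the failure above):

```go
// Minimal sketch of the kind of existence check the test performs.
// The cache path is illustrative; the real path is derived from
// MINIKUBE_HOME plus the preload version and container runtime.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. C:\Users\...\minikube-integration\.minikube
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("expected tarball %q to exist, but got error: %v\n", tarball, err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present")
}
```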

TestErrorSpam/setup (59.97s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-259400 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-259400 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 --driver=docker: (59.9740591s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-259400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22094
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-259400" primary control-plane node in "nospam-259400" cluster
* Pulling base image v0.0.48-1765275396-22083 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-259400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (59.97s)
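Note: TestErrorSpam fails whenever a plain `minikube start` emits stderr lines that are not on its allowlist; the two proxy warnings above are what tripped it here. A simplified sketch of that kind of scan (not the actual error_spam_test code):

```go
// Simplified sketch (not the actual error_spam_test code) of flagging
// unexpected stderr from a quiet `minikube start`: any non-empty line
// that matches nothing on the allowlist is reported as spam.
package main

import (
	"fmt"
	"strings"
)

func unexpectedStderr(stderr string, allowed []string) []string {
	var bad []string
	for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		ok := false
		for _, a := range allowed {
			if strings.Contains(line, a) {
				ok = true
				break
			}
		}
		if !ok {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! Failing to connect to https://registry.k8s.io/ from inside the minikube container\n"
	for _, l := range unexpectedStderr(stderr, nil) {
		fmt.Println("unexpected stderr:", l)
	}
}
```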

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (532.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1
E1210 05:49:45.889276   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.261343   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.268030   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.280048   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.301517   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.343462   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.425049   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.586923   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:02.908621   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:03.550781   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:04.832538   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:07.393842   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:12.516251   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:22.758143   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:53:43.240361   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:54:24.202555   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:54:45.892374   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:55:46.124830   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:56:08.972674   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m50.1578157s)

-- stdout --
	* [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:50076
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:50076
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-871500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-871500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000734582s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000709985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000709985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
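Note: both kubeadm attempts above timed out in the wait-control-plane phase while polling the kubelet health endpoint. A minimal sketch of an equivalent probe, mirroring the `curl -sSL http://127.0.0.1:10248/healthz` check quoted in the error (illustrative only, stdlib net/http):

```go
// Minimal sketch of the kubelet health probe that kubeadm's
// wait-control-plane phase performs; equivalent in spirit to
// `curl -sSL http://127.0.0.1:10248/healthz`. Illustrative only.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // kubeadm waits up to 4m0s
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet not healthy before deadline")
}
```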
functional_test.go:2241: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 6 (603.1585ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 05:57:14.841176   13452 status.go:458] kubeconfig endpoint: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
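Note: the exit status 6 comes from the kubeconfig endpoint lookup at status.go:458: the profile name never made it into the kubeconfig because the start failed. A rough stdlib sketch of that check (minikube's real code parses the kubeconfig structure; this string scan is illustrative only):

```go
// Rough sketch of the check behind the status error above: does the
// profile name appear anywhere in the kubeconfig file? minikube's
// real code parses the kubeconfig and resolves the cluster endpoint;
// this plain string scan is illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func profileInKubeconfig(kubeconfigPath, profile string) (bool, error) {
	data, err := os.ReadFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	return strings.Contains(string(data), profile), nil
}

func main() {
	found, err := profileInKubeconfig(os.Getenv("KUBECONFIG"), "functional-871500")
	if err != nil {
		fmt.Println("read kubeconfig:", err)
		return
	}
	if !found {
		fmt.Println(`"functional-871500" does not appear in the kubeconfig`)
	}
}
```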
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.0790046s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr                                                             │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ docker-env     │ functional-493600 docker-env                                                                                                                              │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image save kicbase/echo-server:functional-493600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image rm kicbase/echo-server:functional-493600 --alsologtostderr                                                                        │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service        │ functional-493600 service hello-node --url --format={{.IP}}                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ docker-env     │ functional-493600 docker-env                                                                                                                              │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh            │ functional-493600 ssh sudo cat /etc/test/nested/copy/11304/hosts                                                                                          │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image save --daemon kicbase/echo-server:functional-493600 --alsologtostderr                                                             │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format short --alsologtostderr                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format yaml --alsologtostderr                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh            │ functional-493600 ssh pgrep buildkitd                                                                                                                     │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                                                    │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format json --alsologtostderr                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service        │ functional-493600 service hello-node --url                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image ls --format table --alsologtostderr                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete         │ -p functional-493600                                                                                                                                      │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start          │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:48:24
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:48:24.107386    3764 out.go:360] Setting OutFile to fd 2044 ...
	I1210 05:48:24.150696    3764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:48:24.150696    3764 out.go:374] Setting ErrFile to fd 1800...
	I1210 05:48:24.150696    3764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:48:24.164698    3764 out.go:368] Setting JSON to false
	I1210 05:48:24.167584    3764 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4636,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:48:24.167655    3764 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:48:24.172612    3764 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:48:24.175917    3764 notify.go:221] Checking for updates...
	I1210 05:48:24.177926    3764 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:48:24.181283    3764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:48:24.183286    3764 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:48:24.185183    3764 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:48:24.188353    3764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:48:24.190656    3764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:48:24.307263    3764 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:48:24.310641    3764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:48:24.544994    3764 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-10 05:48:24.525961682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
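The `docker system info --format "{{json .}}"` probes above are how minikube sizes up the host daemon (CPU count, memory, cgroup driver, server version) before choosing defaults. A minimal Go sketch of the same probe, decoding only a few of the fields visible in the dump; the struct is an illustrative subset, not minikube's actual info.go type:

```go
// Probe the local Docker daemon and decode a handful of fields from its
// JSON info dump. Field names follow Docker's own JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ID            string `json:"ID"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	OSType        string `json:"OSType"`
	CgroupDriver  string `json:"CgroupDriver"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("server %s, %d CPUs, %d bytes RAM, cgroup driver %q\n",
		info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
}
```

Against the daemon logged above this would report server 27.4.0, 16 CPUs, 33657536512 bytes and cgroup driver "cgroupfs".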
	I1210 05:48:24.547586    3764 out.go:179] * Using the docker driver based on user configuration
	I1210 05:48:24.551582    3764 start.go:309] selected driver: docker
	I1210 05:48:24.551582    3764 start.go:927] validating driver "docker" against <nil>
	I1210 05:48:24.551582    3764 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:48:24.638260    3764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:48:24.873619    3764 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-10 05:48:24.85282853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:48:24.873619    3764 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:48:24.874191    3764 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 05:48:24.877253    3764 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 05:48:24.879584    3764 cni.go:84] Creating CNI manager for ""
	I1210 05:48:24.879686    3764 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:48:24.879721    3764 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
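The selection rule recorded at cni.go:158 above is: docker driver plus docker container runtime on Kubernetes v1.24+ means kubelet no longer bundles networking (dockershim was removed in v1.24), so minikube recommends a bridge CNI and sets NetworkPlugin=cni. A toy version of that decision, using golang.org/x/mod/semver for the version check (an assumption for the sketch; minikube's cni.go does its own version handling):

```go
// chooseCNI mirrors the shape of the auto-selection logged above.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func chooseCNI(driver, runtime, k8sVersion string) string {
	// From v1.24 on, kubelet cannot manage pod networking itself, so a
	// concrete CNI (bridge) must be provided for the docker/docker combo.
	if driver == "docker" && runtime == "docker" && semver.Compare(k8sVersion, "v1.24") >= 0 {
		return "bridge"
	}
	return "" // empty: leave the runtime's default networking alone
}

func main() {
	fmt.Println(chooseCNI("docker", "docker", "v1.35.0-rc.1")) // prints: bridge
}
```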
	W1210 05:48:24.879853    3764 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	W1210 05:48:24.879950    3764 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	I1210 05:48:24.880155    3764 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
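The cluster config above is exactly what profile.go:143 below serializes to .minikube\profiles\functional-871500\config.json. A small sketch of that persistence step; the struct is a tiny stand-in holding only a few of the fields from the dump, not minikube's real ClusterConfig:

```go
// Save a minimal profile config as JSON under profiles/<name>/config.json.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type clusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	APIServerPort     int
	KubernetesVersion string
}

func saveProfile(home string, cc clusterConfig) error {
	dir := filepath.Join(home, "profiles", cc.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cc, "", "  ")
	if err != nil {
		return err
	}
	// Write to a temp file first, then rename, so a crash mid-write cannot
	// leave a truncated config.json behind.
	tmp := filepath.Join(dir, "config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	cc := clusterConfig{Name: "functional-871500", Driver: "docker",
		Memory: 4096, CPUs: 2, APIServerPort: 8441, KubernetesVersion: "v1.35.0-rc.1"}
	if err := saveProfile(os.TempDir(), cc); err != nil {
		fmt.Println(err)
	}
}
```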
	I1210 05:48:24.881602    3764 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 05:48:24.885876    3764 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:48:24.890169    3764 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:48:24.894769    3764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:48:24.895014    3764 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:48:24.895014    3764 cache.go:65] Caching tarball of preloaded images
	I1210 05:48:24.895108    3764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:48:24.895373    3764 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 05:48:24.895420    3764 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 05:48:24.895420    3764 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 05:48:24.896093    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json: {Name:mkfc18ae291f1fc39f496b94d7b18be722be7ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:48:24.969848    3764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:48:24.969848    3764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 05:48:24.969848    3764 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:48:24.969848    3764 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:48:24.969848    3764 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-871500"
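Both lock entries above carry {Delay:500ms Timeout:...}: the lock is retried every Delay until Timeout expires (here it was free, so acquisition took 0s). A sketch of that retry shape using an exclusive lock file via O_CREATE|O_EXCL; this is an assumed stand-in for whatever lock primitive minikube actually uses, reproducing only the Delay/Timeout behavior:

```go
// acquire polls for an exclusive lock file until it wins or times out.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay) // the Delay:500ms field in the log above
	}
}

func main() {
	release, err := acquire(os.TempDir()+"/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; provisioning can proceed")
}
```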
	I1210 05:48:24.969848    3764 start.go:93] Provisioning new machine with config: &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 05:48:24.969848    3764 start.go:125] createHost starting for "" (driver="docker")
	I1210 05:48:24.975191    3764 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1210 05:48:24.975336    3764 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	W1210 05:48:24.975336    3764 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:50076 to docker env.
	I1210 05:48:24.975336    3764 start.go:159] libmachine.API.Create for "functional-871500" (driver="docker")
	I1210 05:48:24.975336    3764 client.go:173] LocalClient.Create starting
	I1210 05:48:24.975336    3764 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 05:48:24.975336    3764 main.go:143] libmachine: Decoding PEM data...
	I1210 05:48:24.975336    3764 main.go:143] libmachine: Parsing certificate...
	I1210 05:48:24.976246    3764 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 05:48:24.976412    3764 main.go:143] libmachine: Decoding PEM data...
	I1210 05:48:24.976412    3764 main.go:143] libmachine: Parsing certificate...
	I1210 05:48:24.980728    3764 cli_runner.go:164] Run: docker network inspect functional-871500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 05:48:25.032147    3764 cli_runner.go:211] docker network inspect functional-871500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 05:48:25.036304    3764 network_create.go:284] running [docker network inspect functional-871500] to gather additional debugging logs...
	I1210 05:48:25.036304    3764 cli_runner.go:164] Run: docker network inspect functional-871500
	W1210 05:48:25.087784    3764 cli_runner.go:211] docker network inspect functional-871500 returned with exit code 1
	I1210 05:48:25.087784    3764 network_create.go:287] error running [docker network inspect functional-871500]: docker network inspect functional-871500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-871500 not found
	I1210 05:48:25.087784    3764 network_create.go:289] output of [docker network inspect functional-871500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-871500 not found
	
	** /stderr **
	I1210 05:48:25.092419    3764 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 05:48:25.157240    3764 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001895170}
	I1210 05:48:25.157240    3764 network_create.go:124] attempt to create docker network functional-871500 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 05:48:25.161278    3764 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-871500 functional-871500
	I1210 05:48:25.307195    3764 network_create.go:108] docker network functional-871500 192.168.49.0/24 created
	I1210 05:48:25.307195    3764 kic.go:121] calculated static IP "192.168.49.2" for the "functional-871500" container
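The network step above picks a free private /24 (192.168.49.0/24 here), creates a bridge network with the flags shown, and derives the container's static IP as gateway .1 / node .2. A sketch of that flow from Go; the candidate list and error handling are simplified assumptions relative to minikube's network.go:

```go
// Find a free 192.168.x.0/24, then create the cluster's bridge network.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// usedSubnets collects the subnets of all existing docker networks.
func usedSubnets() map[string]bool {
	out, _ := exec.Command("docker", "network", "ls", "-q").Output()
	used := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		s, _ := exec.Command("docker", "network", "inspect", id,
			"-f", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		used[strings.TrimSpace(string(s))] = true
	}
	return used
}

func main() {
	used := usedSubnets()
	for third := 49; third < 255; third += 10 { // 192.168.49.0/24, 192.168.59.0/24, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if used[subnet] {
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", "functional-871500")
		if err := cmd.Run(); err != nil {
			fmt.Println("create failed, trying next subnet:", err)
			continue
		}
		fmt.Printf("created network on %s; container IP will be 192.168.%d.2\n", subnet, third)
		return
	}
}
```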
	I1210 05:48:25.316982    3764 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 05:48:25.373294    3764 cli_runner.go:164] Run: docker volume create functional-871500 --label name.minikube.sigs.k8s.io=functional-871500 --label created_by.minikube.sigs.k8s.io=true
	I1210 05:48:25.430993    3764 oci.go:103] Successfully created a docker volume functional-871500
	I1210 05:48:25.434641    3764 cli_runner.go:164] Run: docker run --rm --name functional-871500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-871500 --entrypoint /usr/bin/test -v functional-871500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 05:48:26.764147    3764 cli_runner.go:217] Completed: docker run --rm --name functional-871500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-871500 --entrypoint /usr/bin/test -v functional-871500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.3294912s)
	I1210 05:48:26.764147    3764 oci.go:107] Successfully prepared a docker volume functional-871500
	I1210 05:48:26.764147    3764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:48:26.764147    3764 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 05:48:26.767997    3764 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-871500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 05:48:41.774041    3764 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-871500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (15.0058311s)
	I1210 05:48:41.774116    3764 kic.go:203] duration metric: took 15.0097195s to extract preloaded images to volume ...
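The two cli_runner entries above unpack the preload tarball into the named volume by running the kicbase image with /usr/bin/tar as the entrypoint, which is why the extraction dominates this phase (15s in the log). A compact sketch of issuing the same docker run from Go; the image tag, volume name, and Windows path are taken from the log:

```go
// Extract the lz4 preload tarball into the "functional-871500" volume using
// a throwaway container whose entrypoint is tar, mirroring the command above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		preload = `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4`
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
		volume  = "functional-871500"
	)
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro", // tarball mounted read-only
		"-v", volume+":/extractDir",        // named volume that becomes /var
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start)) // ~15s in the log
}
```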
	I1210 05:48:41.778127    3764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:48:42.016993    3764 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-10 05:48:41.991841681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:48:42.020358    3764 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 05:48:42.275312    3764 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-871500 --name functional-871500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-871500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-871500 --network functional-871500 --ip 192.168.49.2 --volume functional-871500:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 05:48:42.945090    3764 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Running}}
	I1210 05:48:43.004218    3764 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:48:43.058552    3764 cli_runner.go:164] Run: docker exec functional-871500 stat /var/lib/dpkg/alternatives/iptables
	I1210 05:48:43.163977    3764 oci.go:144] the created container "functional-871500" has a running status.
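After the long docker run above, the inspect calls at 05:48:42.9 and 05:48:43.0 confirm the node container is actually running before SSH provisioning begins. A small sketch of that verification as a poll loop, using the same inspect template the log shows:

```go
// waitRunning polls docker container inspect until State.Running is true.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format={{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("functional-871500", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(`the created container "functional-871500" has a running status.`)
}
```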
	I1210 05:48:43.163977    3764 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa...
	I1210 05:48:43.381330    3764 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 05:48:43.455748    3764 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:48:43.522441    3764 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 05:48:43.522441    3764 kic_runner.go:114] Args: [docker exec --privileged functional-871500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 05:48:43.665130    3764 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa...
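kic.go:225 above mints a fresh RSA keypair for the node and pushes the public half into /home/docker/.ssh/authorized_keys (381 bytes in the log, which is consistent with a 2048-bit RSA authorized_keys entry). A sketch of the key-generation side with crypto/rsa and golang.org/x/crypto/ssh; the output file names are illustrative:

```go
// Generate an SSH keypair: PEM private key (0600, matching the permission
// tightening step in the log) plus an authorized_keys-format public key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public key in "ssh-rsa AAAA..." authorized_keys format.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	authorized := ssh.MarshalAuthorizedKey(pub)
	if err := os.WriteFile("id_rsa.pub", authorized, 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("authorized_keys entry is %d bytes\n", len(authorized))
}
```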
	I1210 05:48:45.754297    3764 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:48:45.809104    3764 machine.go:94] provisionDockerMachine start ...
	I1210 05:48:45.812865    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:45.868409    3764 main.go:143] libmachine: Using SSH client type: native
	I1210 05:48:45.881410    3764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:48:45.881410    3764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:48:46.050726    3764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:48:46.050726    3764 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 05:48:46.054493    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:46.109020    3764 main.go:143] libmachine: Using SSH client type: native
	I1210 05:48:46.109439    3764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:48:46.109439    3764 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 05:48:46.298345    3764 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:48:46.301931    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:46.357542    3764 main.go:143] libmachine: Using SSH client type: native
	I1210 05:48:46.357542    3764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:48:46.357542    3764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:48:46.533142    3764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:48:46.533142    3764 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 05:48:46.533142    3764 ubuntu.go:190] setting up certificates
	I1210 05:48:46.533142    3764 provision.go:84] configureAuth start
	I1210 05:48:46.536693    3764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:48:46.589956    3764 provision.go:143] copyHostCerts
	I1210 05:48:46.589956    3764 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 05:48:46.589956    3764 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 05:48:46.590503    3764 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 05:48:46.591293    3764 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 05:48:46.591293    3764 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 05:48:46.591293    3764 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 05:48:46.592031    3764 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 05:48:46.592031    3764 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 05:48:46.592714    3764 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 05:48:46.593264    3764 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
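provision.go:117 above issues a server certificate signed by the minikube CA, with SANs covering 127.0.0.1, the container IP, the profile name, localhost, and "minikube". A self-contained sketch with crypto/x509 that builds a throwaway CA and then a server cert carrying exactly those SANs; key sizes and the specific template fields are assumptions, though the 3-year lifetime matches the CertExpiration:26280h0m0s field in the config dump:

```go
// Create a CA, then a CA-signed server cert with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-871500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // 26280h = 3 years
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-871500", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d bytes DER, SANs %v + %v\n",
		len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}
```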
	I1210 05:48:46.788381    3764 provision.go:177] copyRemoteCerts
	I1210 05:48:46.792381    3764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:48:46.794380    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:46.847146    3764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:48:46.988532    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:48:47.017617    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:48:47.043419    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 05:48:47.073167    3764 provision.go:87] duration metric: took 539.9715ms to configureAuth
	I1210 05:48:47.073193    3764 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:48:47.073696    3764 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:48:47.077016    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:47.132662    3764 main.go:143] libmachine: Using SSH client type: native
	I1210 05:48:47.132832    3764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:48:47.132832    3764 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 05:48:47.313105    3764 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 05:48:47.313105    3764 ubuntu.go:71] root file system type: overlay
	I1210 05:48:47.313105    3764 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 05:48:47.317556    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:47.375321    3764 main.go:143] libmachine: Using SSH client type: native
	I1210 05:48:47.375321    3764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:48:47.375321    3764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 05:48:47.565974    3764 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 05:48:47.569991    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:47.626591    3764 main.go:143] libmachine: Using SSH client type: native
	I1210 05:48:47.626591    3764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:48:47.626591    3764 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 05:48:49.071578    3764 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 05:48:47.552156536 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1210 05:48:49.071578    3764 machine.go:97] duration metric: took 3.2624365s to provisionDockerMachine
	I1210 05:48:49.071578    3764 client.go:176] duration metric: took 24.0959619s to LocalClient.Create
	I1210 05:48:49.071578    3764 start.go:167] duration metric: took 24.0959619s to libmachine.API.Create "functional-871500"
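The one-liner at 05:48:47.626 is a write-if-changed guard: `diff -u` exits 0 when the rendered unit already matches the installed one, so the mv/daemon-reload/restart branch only fires when the file genuinely changed, which the diff output above shows it did on this first boot. The same idempotence expressed directly in Go, a sketch with local paths standing in for the remote ones:

```go
// writeIfChanged installs new content only when it differs, and reports
// whether a daemon-reload/restart is needed.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func writeIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // same as `diff -u` exiting 0: nothing to do
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // same as the `sudo mv ....new ...` branch
}

func main() {
	unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd ...\n")
	changed, err := writeIfChanged("docker.service", unit)
	if err != nil {
		fmt.Println(err)
		return
	}
	if changed {
		fmt.Println("unit changed: would run systemctl daemon-reload && systemctl restart docker")
	}
}
```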
	I1210 05:48:49.071578    3764 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 05:48:49.071578    3764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:48:49.075974    3764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:48:49.079015    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:49.133086    3764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:48:49.271637    3764 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:48:49.279870    3764 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:48:49.279899    3764 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:48:49.279923    3764 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 05:48:49.279979    3764 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 05:48:49.280501    3764 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 05:48:49.280600    3764 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 05:48:49.285303    3764 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 05:48:49.298314    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 05:48:49.324099    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 05:48:49.351799    3764 start.go:296] duration metric: took 280.2179ms for postStartSetup
	I1210 05:48:49.357330    3764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:48:49.409475    3764 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 05:48:49.415495    3764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:48:49.419135    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:49.472229    3764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:48:49.597986    3764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:48:49.606508    3764 start.go:128] duration metric: took 24.6363743s to createHost
	I1210 05:48:49.606508    3764 start.go:83] releasing machines lock for "functional-871500", held for 24.6363743s
	I1210 05:48:49.610102    3764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:48:49.668416    3764 out.go:179] * Found network options:
	I1210 05:48:49.670769    3764 out.go:179]   - HTTP_PROXY=localhost:50076
	W1210 05:48:49.672990    3764 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1210 05:48:49.676362    3764 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1210 05:48:49.679600    3764 out.go:179]   - HTTP_PROXY=localhost:50076
	I1210 05:48:49.682471    3764 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 05:48:49.686431    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:49.686431    3764 ssh_runner.go:195] Run: cat /version.json
	I1210 05:48:49.690016    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:49.739599    3764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:48:49.741452    3764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:48:49.865640    3764 ssh_runner.go:195] Run: systemctl --version
	W1210 05:48:49.867948    3764 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 05:48:49.880670    3764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 05:48:49.890674    3764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:48:49.895242    3764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:48:49.943691    3764 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 05:48:49.943691    3764 start.go:496] detecting cgroup driver to use...
	I1210 05:48:49.943691    3764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:48:49.943691    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:48:49.969712    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:48:49.988223    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 05:48:49.994435    3764 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 05:48:49.994435    3764 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 05:48:50.006211    3764 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:48:50.010106    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:48:50.028205    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:48:50.047321    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:48:50.067041    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:48:50.087670    3764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:48:50.106301    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:48:50.124944    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:48:50.143075    3764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:48:50.163687    3764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:48:50.180815    3764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:48:50.198489    3764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:48:50.334813    3764 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:48:50.496531    3764 start.go:496] detecting cgroup driver to use...
	I1210 05:48:50.496531    3764 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:48:50.501075    3764 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 05:48:50.526050    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:48:50.550744    3764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:48:50.612921    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:48:50.635149    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:48:50.653176    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:48:50.679194    3764 ssh_runner.go:195] Run: which cri-dockerd
	I1210 05:48:50.690942    3764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 05:48:50.705074    3764 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
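
The 192-byte drop-in itself is not printed in the log, so its contents are not reproduced here; it can be read back on the node, where systemctl shows the unit with all drop-ins merged:

    minikube -p functional-871500 ssh "sudo systemctl cat cri-docker.service"
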
	I1210 05:48:50.729633    3764 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 05:48:50.874202    3764 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 05:48:51.021234    3764 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 05:48:51.021340    3764 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 05:48:51.046843    3764 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:48:51.068479    3764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:48:51.186632    3764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:48:52.059589    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:48:52.081491    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 05:48:52.104575    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:48:52.127939    3764 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 05:48:52.275525    3764 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 05:48:52.414160    3764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:48:52.565997    3764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 05:48:52.591240    3764 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 05:48:52.614316    3764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:48:52.756034    3764 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 05:48:52.860412    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:48:52.878584    3764 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 05:48:52.884476    3764 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 05:48:52.891540    3764 start.go:564] Will wait 60s for crictl version
	I1210 05:48:52.896129    3764 ssh_runner.go:195] Run: which crictl
	I1210 05:48:52.906958    3764 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:48:52.950848    3764 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
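
The probe above can be reproduced by hand; with /etc/crictl.yaml now pointing at cri-dockerd, crictl reports Docker as the CRI runtime:

    $ sudo /usr/local/bin/crictl version
    Version:  0.1.0
    RuntimeName:  docker
    RuntimeVersion:  29.1.2
    RuntimeApiVersion:  v1
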
	I1210 05:48:52.954259    3764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:48:52.994984    3764 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:48:53.036602    3764 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 05:48:53.040268    3764 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 05:48:53.171796    3764 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 05:48:53.175935    3764 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 05:48:53.185671    3764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
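
The one-liner above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the fresh mapping, then copy the temp file back into place with sudo. Reformatted for readability:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.65.254	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
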
	I1210 05:48:53.203936    3764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:48:53.257996    3764 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:48:53.257996    3764 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:48:53.261745    3764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:48:53.300162    3764 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:48:53.300162    3764 docker.go:697] registry.k8s.io/etcd:3.6.5-0 wasn't preloaded
	I1210 05:48:53.304127    3764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1210 05:48:53.321818    3764 ssh_runner.go:195] Run: which lz4
	I1210 05:48:53.333390    3764 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 05:48:53.340423    3764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 05:48:53.340595    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284645196 bytes)
	I1210 05:48:56.175078    3764 docker.go:655] duration metric: took 2.8464024s to copy over tarball
	I1210 05:48:56.179683    3764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 05:48:58.340525    3764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.1607511s)
	I1210 05:48:58.340525    3764 ssh_runner.go:146] rm: /preloaded.tar.lz4
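
The preload path is: scp the ~284 MB tarball into the container, unpack it over /var, then delete it. Annotated, the extraction command is:

    # -I lz4 filters the archive through the lz4 binary located with `which lz4`;
    # --xattrs preserves the security.capability attributes some images rely on.
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
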
	I1210 05:48:58.397177    3764 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1210 05:48:58.409280    3764 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2652 bytes)
	I1210 05:48:58.431724    3764 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:48:58.453600    3764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:48:58.604637    3764 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:49:05.565411    3764 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.960693s)
	I1210 05:49:05.568903    3764 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:49:05.602762    3764 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:49:05.602762    3764 docker.go:697] registry.k8s.io/etcd:3.6.5-0 wasn't preloaded
	I1210 05:49:05.602762    3764 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 05:49:05.615407    3764 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:49:05.620210    3764 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:49:05.625194    3764 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:49:05.628560    3764 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:49:05.632329    3764 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:49:05.632329    3764 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:49:05.637078    3764 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:49:05.637168    3764 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:49:05.643453    3764 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:49:05.643453    3764 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:49:05.647211    3764 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:49:05.648234    3764 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:49:05.651216    3764 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:49:05.651216    3764 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:49:05.656222    3764 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:49:05.659214    3764 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1210 05:49:05.687206    3764 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:05.737995    3764 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:05.790973    3764 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:05.844909    3764 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:05.898741    3764 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:05.949889    3764 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:06.001650    3764 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:49:06.053112    3764 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 05:49:06.187063    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:49:06.188852    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:49:06.207374    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:49:06.226705    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 05:49:06.244332    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 05:49:06.265252    3764 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 05:49:06.265252    3764 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 05:49:06.265252    3764 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:49:06.269684    3764 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 05:49:06.297458    3764 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 05:49:06.305336    3764 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:49:06.312396    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:49:06.315770    3764 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 05:49:06.315770    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 05:49:06.323700    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:49:06.418273    3764 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:49:06.580271    3764 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 05:49:06.580271    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 05:49:07.915487    3764 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (1.3352001s)
	I1210 05:49:07.915487    3764 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 05:49:07.915487    3764 cache_images.go:125] Successfully loaded all cached images
	I1210 05:49:07.915487    3764 cache_images.go:94] duration metric: took 2.312698s to LoadCachedImages
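
Because the preload ships etcd 3.6.6-0 while this Kubernetes version wants 3.6.5-0, minikube falls back to its per-image cache and streams the saved tarball into Docker. The load step as a single manual command:

    sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load
    docker images registry.k8s.io/etcd    # should now list the 3.6.5-0 tag
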
	I1210 05:49:07.915487    3764 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 05:49:07.915487    3764 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
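
The empty ExecStart= line in the unit fragment above is deliberate: systemd accumulates ExecStart entries across drop-ins, so the list has to be cleared before the kubelet command can be replaced. The pattern in isolation:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
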
	I1210 05:49:07.920036    3764 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 05:49:07.993974    3764 cni.go:84] Creating CNI manager for ""
	I1210 05:49:07.993974    3764 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:49:07.993974    3764 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:49:07.993974    3764 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:49:07.993974    3764 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
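
The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases ship a validate subcommand, so a sanity check of the staged file (assuming the subcommand is present in this RC build) would be:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
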
	
	I1210 05:49:07.997874    3764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:49:08.011099    3764 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:49:08.015360    3764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:49:08.028655    3764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 05:49:08.048007    3764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:49:08.067196    3764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1210 05:49:08.092513    3764 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:49:08.100040    3764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 05:49:08.118816    3764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:49:08.266298    3764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:49:08.288269    3764 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 05:49:08.288269    3764 certs.go:195] generating shared ca certs ...
	I1210 05:49:08.288269    3764 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.288943    3764 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 05:49:08.288943    3764 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 05:49:08.288943    3764 certs.go:257] generating profile certs ...
	I1210 05:49:08.289598    3764 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 05:49:08.289598    3764 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.crt with IP's: []
	I1210 05:49:08.459212    3764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.crt ...
	I1210 05:49:08.459212    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.crt: {Name:mk2e26df061bbdbd45fcacfe54d56939522849b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.460467    3764 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key ...
	I1210 05:49:08.460500    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key: {Name:mk046935ea2f4e4f263232d3372ea790d18c035c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.460976    3764 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 05:49:08.460976    3764 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt.53a949a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 05:49:08.503599    3764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt.53a949a1 ...
	I1210 05:49:08.503599    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt.53a949a1: {Name:mke810229c1661b00fe5dbb800837cdccff980b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.504599    3764 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1 ...
	I1210 05:49:08.504599    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1: {Name:mkfac50df7f1d87cf8fabd257900660c8697a823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.505609    3764 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt.53a949a1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt
	I1210 05:49:08.517614    3764 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key
	I1210 05:49:08.518830    3764 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 05:49:08.518870    3764 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt with IP's: []
	I1210 05:49:08.570879    3764 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt ...
	I1210 05:49:08.570879    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt: {Name:mk5edfcd521d463315d58692cd82a75c466b068d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.571727    3764 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key ...
	I1210 05:49:08.571727    3764 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key: {Name:mk46a93b842f604b07a4e250d70e1a94f02552d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:49:08.585723    3764 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 05:49:08.585723    3764 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 05:49:08.585723    3764 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 05:49:08.586725    3764 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 05:49:08.586725    3764 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 05:49:08.586725    3764 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 05:49:08.586725    3764 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 05:49:08.587725    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:49:08.618718    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:49:08.643153    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:49:08.669607    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:49:08.698745    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:49:08.728173    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:49:08.752875    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:49:08.778760    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:49:08.807454    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 05:49:08.837929    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:49:08.868207    3764 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 05:49:08.895970    3764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
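
With the profile certs now on the node, the SAN list generated above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2) can be confirmed directly against the copied apiserver cert:

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
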
	I1210 05:49:08.922093    3764 ssh_runner.go:195] Run: openssl version
	I1210 05:49:08.935414    3764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 05:49:08.952654    3764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 05:49:08.970473    3764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 05:49:08.978587    3764 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:49:08.983136    3764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 05:49:09.031200    3764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:49:09.048194    3764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 05:49:09.066275    3764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 05:49:09.083632    3764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 05:49:09.102883    3764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 05:49:09.112105    3764 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:49:09.116850    3764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 05:49:09.164042    3764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 05:49:09.180897    3764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 05:49:09.198258    3764 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:49:09.215793    3764 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:49:09.233004    3764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:49:09.241839    3764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:49:09.246272    3764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:49:09.293181    3764 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:49:09.310756    3764 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
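
The hex names in /etc/ssl/certs are OpenSSL subject hashes: each `openssl x509 -hash` run above computes the hash, and the following ln -fs creates the <hash>.0 link that OpenSSL's directory lookup expects. For the minikube CA:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
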
	I1210 05:49:09.331358    3764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:49:09.337875    3764 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 05:49:09.338265    3764 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:49:09.343104    3764 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 05:49:09.380933    3764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:49:09.398246    3764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 05:49:09.412808    3764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:49:09.417326    3764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:49:09.431469    3764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:49:09.431469    3764 kubeadm.go:158] found existing configuration files:
	
	I1210 05:49:09.435705    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:49:09.449498    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:49:09.454464    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:49:09.472998    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:49:09.487126    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:49:09.491102    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:49:09.510001    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:49:09.524770    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:49:09.528178    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:49:09.545145    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:49:09.558414    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:49:09.562352    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
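
The grep/rm sequence above is a stale-config sweep: any kubeconfig that does not point at the expected control-plane endpoint is deleted so kubeadm can regenerate it. Condensed into one loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' \
          /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done
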
	I1210 05:49:09.580496    3764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:49:09.686770    3764 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 05:49:09.773229    3764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 05:49:09.864269    3764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:53:11.695896    3764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 05:53:11.695941    3764 kubeadm.go:319] 
	I1210 05:53:11.696067    3764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 05:53:11.699890    3764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:53:11.700031    3764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:53:11.700031    3764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:53:11.700031    3764 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 05:53:11.700031    3764 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 05:53:11.700031    3764 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 05:53:11.700603    3764 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 05:53:11.700653    3764 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 05:53:11.700725    3764 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 05:53:11.700725    3764 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 05:53:11.700725    3764 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 05:53:11.700725    3764 kubeadm.go:319] CONFIG_INET: enabled
	I1210 05:53:11.700725    3764 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 05:53:11.700725    3764 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 05:53:11.701260    3764 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 05:53:11.701346    3764 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 05:53:11.701522    3764 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 05:53:11.701685    3764 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 05:53:11.701833    3764 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 05:53:11.701912    3764 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 05:53:11.701993    3764 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 05:53:11.702071    3764 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 05:53:11.702154    3764 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 05:53:11.702649    3764 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 05:53:11.702649    3764 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 05:53:11.702649    3764 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 05:53:11.702649    3764 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 05:53:11.702649    3764 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 05:53:11.702649    3764 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 05:53:11.703262    3764 kubeadm.go:319] OS: Linux
	I1210 05:53:11.703262    3764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:53:11.703262    3764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:53:11.703262    3764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:53:11.703262    3764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:53:11.703262    3764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:53:11.703777    3764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:53:11.703888    3764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:53:11.703888    3764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:53:11.703888    3764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:53:11.703888    3764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:53:11.703888    3764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:53:11.704417    3764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:53:11.704504    3764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:53:11.707493    3764 out.go:252]   - Generating certificates and keys ...
	I1210 05:53:11.708061    3764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:53:11.708061    3764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:53:11.708061    3764 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 05:53:11.708061    3764 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 05:53:11.708061    3764 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 05:53:11.708583    3764 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 05:53:11.708625    3764 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 05:53:11.708625    3764 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-871500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:53:11.708625    3764 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 05:53:11.709197    3764 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-871500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 05:53:11.709270    3764 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 05:53:11.709270    3764 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 05:53:11.709270    3764 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 05:53:11.709270    3764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:53:11.709270    3764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:53:11.709270    3764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:53:11.709270    3764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:53:11.709270    3764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:53:11.709270    3764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:53:11.709270    3764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:53:11.710226    3764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:53:11.713329    3764 out.go:252]   - Booting up control plane ...
	I1210 05:53:11.713329    3764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:53:11.714356    3764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:53:11.715428    3764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:53:11.715577    3764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:53:11.715577    3764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000734582s
	I1210 05:53:11.715577    3764 kubeadm.go:319] 
	I1210 05:53:11.715577    3764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 05:53:11.715577    3764 kubeadm.go:319] 	- The kubelet is not running
	I1210 05:53:11.715577    3764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 05:53:11.715577    3764 kubeadm.go:319] 
	I1210 05:53:11.715577    3764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 05:53:11.715577    3764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 05:53:11.715577    3764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 05:53:11.715577    3764 kubeadm.go:319] 
	W1210 05:53:11.715577    3764 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-871500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-871500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000734582s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
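The two warnings above, read together with the kubelet health-check timeout, point at the actual fault: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host unless cgroup v1 support is explicitly re-enabled via the 'FailCgroupV1' option named in the SystemVerification warning. A minimal diagnostic sketch follows; the camelCase YAML field name is inferred from that option name and should be verified against the kubelet configuration reference before use.

    # Report the node's cgroup mode: 'cgroup2fs' means v2, 'tmpfs' means legacy v1.
    stat -fc %T /sys/fs/cgroup/

    # Assumed opt-in per the warning text (field name unverified): append to the
    # kubelet config that kubeadm wrote, then restart the kubelet.
    printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet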
	
	I1210 05:53:11.719957    3764 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 05:53:12.180827    3764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 05:53:12.199689    3764 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 05:53:12.204068    3764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 05:53:12.216520    3764 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 05:53:12.216520    3764 kubeadm.go:158] found existing configuration files:
	
	I1210 05:53:12.220659    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 05:53:12.233086    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 05:53:12.237826    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 05:53:12.257298    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 05:53:12.271628    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 05:53:12.275737    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 05:53:12.293241    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 05:53:12.304372    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 05:53:12.308172    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 05:53:12.326244    3764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 05:53:12.339700    3764 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 05:53:12.342697    3764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 05:53:12.363028    3764 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 05:53:12.476033    3764 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 05:53:12.560616    3764 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 05:53:12.657262    3764 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 05:57:13.509526    3764 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 05:57:13.509526    3764 kubeadm.go:319] 
	I1210 05:57:13.510161    3764 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 05:57:13.517650    3764 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 05:57:13.517650    3764 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 05:57:13.517650    3764 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 05:57:13.517650    3764 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 05:57:13.517650    3764 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 05:57:13.517650    3764 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 05:57:13.517650    3764 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_INET: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 05:57:13.518511    3764 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 05:57:13.519543    3764 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 05:57:13.519543    3764 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 05:57:13.519543    3764 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 05:57:13.519543    3764 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 05:57:13.519543    3764 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 05:57:13.519543    3764 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 05:57:13.520127    3764 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 05:57:13.520153    3764 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 05:57:13.520153    3764 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 05:57:13.520153    3764 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 05:57:13.520153    3764 kubeadm.go:319] OS: Linux
	I1210 05:57:13.520153    3764 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 05:57:13.520153    3764 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 05:57:13.520153    3764 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 05:57:13.520742    3764 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 05:57:13.520770    3764 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 05:57:13.520900    3764 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 05:57:13.520900    3764 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 05:57:13.521044    3764 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 05:57:13.521183    3764 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 05:57:13.521335    3764 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 05:57:13.521499    3764 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 05:57:13.521601    3764 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 05:57:13.521601    3764 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 05:57:13.526177    3764 out.go:252]   - Generating certificates and keys ...
	I1210 05:57:13.526290    3764 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 05:57:13.526371    3764 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 05:57:13.526371    3764 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 05:57:13.526371    3764 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 05:57:13.526371    3764 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 05:57:13.526371    3764 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 05:57:13.526371    3764 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 05:57:13.526935    3764 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 05:57:13.526977    3764 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 05:57:13.526977    3764 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 05:57:13.526977    3764 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 05:57:13.526977    3764 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 05:57:13.526977    3764 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 05:57:13.527552    3764 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 05:57:13.527552    3764 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 05:57:13.527552    3764 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 05:57:13.527552    3764 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 05:57:13.527552    3764 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 05:57:13.528084    3764 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 05:57:13.529756    3764 out.go:252]   - Booting up control plane ...
	I1210 05:57:13.530760    3764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 05:57:13.530760    3764 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 05:57:13.531767    3764 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 05:57:13.531767    3764 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 05:57:13.531767    3764 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000709985s
	I1210 05:57:13.531767    3764 kubeadm.go:319] 
	I1210 05:57:13.531767    3764 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 05:57:13.531767    3764 kubeadm.go:319] 	- The kubelet is not running
	I1210 05:57:13.531767    3764 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 05:57:13.531767    3764 kubeadm.go:319] 
	I1210 05:57:13.531767    3764 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 05:57:13.531767    3764 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 05:57:13.532762    3764 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 05:57:13.532762    3764 kubeadm.go:319] 
	I1210 05:57:13.532762    3764 kubeadm.go:403] duration metric: took 8m4.1887072s to StartCluster
	I1210 05:57:13.532762    3764 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 05:57:13.536756    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 05:57:13.598067    3764 cri.go:89] found id: ""
	I1210 05:57:13.598067    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.598067    3764 logs.go:284] No container was found matching "kube-apiserver"
	I1210 05:57:13.598067    3764 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 05:57:13.602568    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 05:57:13.646274    3764 cri.go:89] found id: ""
	I1210 05:57:13.646274    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.646274    3764 logs.go:284] No container was found matching "etcd"
	I1210 05:57:13.646274    3764 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 05:57:13.651084    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 05:57:13.695623    3764 cri.go:89] found id: ""
	I1210 05:57:13.695623    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.695623    3764 logs.go:284] No container was found matching "coredns"
	I1210 05:57:13.695623    3764 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 05:57:13.700322    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 05:57:13.737787    3764 cri.go:89] found id: ""
	I1210 05:57:13.737787    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.737787    3764 logs.go:284] No container was found matching "kube-scheduler"
	I1210 05:57:13.737787    3764 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 05:57:13.742496    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 05:57:13.781708    3764 cri.go:89] found id: ""
	I1210 05:57:13.781708    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.781708    3764 logs.go:284] No container was found matching "kube-proxy"
	I1210 05:57:13.781708    3764 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 05:57:13.786856    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 05:57:13.829821    3764 cri.go:89] found id: ""
	I1210 05:57:13.829821    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.829821    3764 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 05:57:13.829821    3764 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 05:57:13.834770    3764 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 05:57:13.878992    3764 cri.go:89] found id: ""
	I1210 05:57:13.878992    3764 logs.go:282] 0 containers: []
	W1210 05:57:13.878992    3764 logs.go:284] No container was found matching "kindnet"
	I1210 05:57:13.878992    3764 logs.go:123] Gathering logs for dmesg ...
	I1210 05:57:13.878992    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 05:57:13.919552    3764 logs.go:123] Gathering logs for describe nodes ...
	I1210 05:57:13.919552    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 05:57:14.001796    3764 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:13.991133   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.991810   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.994105   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.994936   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.997420   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 05:57:13.991133   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.991810   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.994105   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.994936   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:13.997420   10159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 05:57:14.001880    3764 logs.go:123] Gathering logs for Docker ...
	I1210 05:57:14.001880    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 05:57:14.033789    3764 logs.go:123] Gathering logs for container status ...
	I1210 05:57:14.033789    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 05:57:14.078903    3764 logs.go:123] Gathering logs for kubelet ...
	I1210 05:57:14.078903    3764 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 05:57:14.140132    3764 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000709985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 05:57:14.140132    3764 out.go:285] * 
	W1210 05:57:14.140132    3764 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000709985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 05:57:14.140132    3764 out.go:285] * 
	W1210 05:57:14.142248    3764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 05:57:14.148370    3764 out.go:203] 
	W1210 05:57:14.152653    3764 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000709985s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 05:57:14.153154    3764 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 05:57:14.153241    3764 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 05:57:14.156423    3764 out.go:203] 
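minikube's own suggestion above is the first remediation to try; against this profile the rerun would look like the line below (profile name taken from this test; whether a cgroup-driver change can satisfy the cgroup v1 validation failing here is an open question).

    out/minikube-windows-amd64.exe start -p functional-871500 --extra-config=kubelet.cgroup-driver=systemd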
	
	
	==> Docker <==
	Dec 10 05:48:58 functional-871500 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.683661078Z" level=info msg="Starting up"
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.704723262Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.704894077Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.704908378Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.719319298Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.736355341Z" level=info msg="Loading containers: start."
	Dec 10 05:48:58 functional-871500 dockerd[1642]: time="2025-12-10T05:48:58.736451149Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.028317044Z" level=info msg="Restoring containers: start."
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.081324637Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.121296724Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.435029216Z" level=info msg="Loading containers: done."
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.455925487Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.456016395Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.456026496Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.456032096Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.456038797Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.456213511Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.456299419Z" level=info msg="Initializing buildkit"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.552407765Z" level=info msg="Completed buildkit initialization"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.561435230Z" level=info msg="Daemon has completed initialization"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.561628746Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.561660149Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 05:49:05 functional-871500 dockerd[1642]: time="2025-12-10T05:49:05.561761858Z" level=info msg="API listen on [::]:2376"
	Dec 10 05:49:05 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
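The daemon log above carries the same root-cause signal ("Support for cgroup v1 is deprecated..."). As a cross-check from the Windows host, the cgroup generation Docker Desktop exposes can be read straight from docker info; both template fields exist in current Docker CLIs.

    docker info --format 'driver={{.CgroupDriver}} cgroup=v{{.CgroupVersion}}'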
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 05:57:15.823714   10317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.824421   10317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.827188   10317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.828257   10317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 05:57:15.829576   10317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
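This describe-nodes failure is a downstream symptom: with the kubelet never becoming healthy, no static pods start, so nothing listens on 8441. A one-line probe from inside the node distinguishes a dead apiserver from a stale kubeconfig endpoint; any TLS response, even a 401/403, would mean the apiserver is actually up.

    curl -sk https://localhost:8441/healthz || echo 'apiserver not listening on 8441'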
	
	
	==> dmesg <==
	[  +0.001036] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000974] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000916] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000962] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000947] FS:  0000000000000000 GS:  0000000000000000
	[  +6.857578] CPU: 3 PID: 46026 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000913] RIP: 0033:0x7fbfebe77b20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7fbfebe77af6.
	[  +0.000640] RSP: 002b:00007fffb111dc60 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000771] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000754] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000763] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001229] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001214] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001385] FS:  0000000000000000 GS:  0000000000000000
	[  +0.806334] CPU: 14 PID: 46138 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.001013] RIP: 0033:0x7f44d4857b20
	[  +0.000506] Code: Unable to access opcode bytes at RIP 0x7f44d4857af6.
	[  +0.000875] RSP: 002b:00007fffdd744d10 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000777] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001083] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001015] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000877] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 05:57:15 up  1:25,  0 user,  load average: 0.45, 0.52, 0.82
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 05:57:12 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:57:13 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 05:57:13 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:13 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:13 functional-871500 kubelet[10041]: E1210 05:57:13.120565   10041 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:57:13 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:57:13 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:57:13 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 05:57:13 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:13 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:13 functional-871500 kubelet[10119]: E1210 05:57:13.874069   10119 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:57:13 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:57:13 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:57:14 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 05:57:14 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:14 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:14 functional-871500 kubelet[10184]: E1210 05:57:14.660865   10184 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:57:14 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:57:14 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 05:57:15 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 05:57:15 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:15 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 05:57:15 functional-871500 kubelet[10211]: E1210 05:57:15.380408   10211 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 05:57:15 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 05:57:15 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
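The kubelet journal above shows a tight restart loop (counter 319 to 322 in roughly two seconds), each attempt failing the same cgroup v1 validation, which is exactly why the earlier health check gave up after 4m0s. Given the kernel string (5.15.153.1-microsoft-standard-WSL2), this CI host is Docker Desktop on WSL2, where one commonly cited way to move the whole environment to cgroup v2 is a WSL kernel command line; treat the snippet below as an assumed environment-level fix, not one verified for this job.

    # %UserProfile%\.wslconfig on the Windows host; apply with 'wsl --shutdown'
    # and a Docker Desktop restart (assumed fix for legacy-cgroup hosts).
    [wsl2]
    kernelCommandLine = cgroup_no_v1=all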
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 6 (560.6884ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 05:57:16.757891    9712 status.go:458] kubeconfig endpoint: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (532.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (373.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1210 05:57:16.805136   11304 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-871500 --alsologtostderr -v=8
E1210 05:58:02.265000   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:58:29.968951   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:59:45.897395   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:03:02.269446   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-871500 --alsologtostderr -v=8: exit status 80 (6m9.3802915s)

-- stdout --
	* [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 05:57:16.875847    3528 out.go:360] Setting OutFile to fd 1624 ...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.917657    3528 out.go:374] Setting ErrFile to fd 1612...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.932616    3528 out.go:368] Setting JSON to false
	I1210 05:57:16.934770    3528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5168,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:57:16.934770    3528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:57:16.939605    3528 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:57:16.942014    3528 notify.go:221] Checking for updates...
	I1210 05:57:16.946622    3528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:16.950394    3528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:57:16.952350    3528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:57:16.955212    3528 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:57:16.957439    3528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:57:16.962034    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:16.962229    3528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:57:17.077929    3528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:57:17.082453    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.310960    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.287646185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.314972    3528 out.go:179] * Using the docker driver based on existing profile
	I1210 05:57:17.316973    3528 start.go:309] selected driver: docker
	I1210 05:57:17.316973    3528 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.316973    3528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:57:17.322956    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.562979    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.536373793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.650233    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:17.650233    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:17.650860    3528 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.654219    3528 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 05:57:17.656244    3528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:57:17.659128    3528 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:57:17.661459    3528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:57:17.661459    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:17.661583    3528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:57:17.661583    3528 cache.go:65] Caching tarball of preloaded images
	I1210 05:57:17.661583    3528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 05:57:17.662115    3528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 05:57:17.662465    3528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 05:57:17.734611    3528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:57:17.734611    3528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 05:57:17.734611    3528 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:57:17.734611    3528 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:57:17.735277    3528 start.go:364] duration metric: took 104.4µs to acquireMachinesLock for "functional-871500"
	I1210 05:57:17.735336    3528 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:57:17.735336    3528 fix.go:54] fixHost starting: 
	I1210 05:57:17.741445    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:17.794847    3528 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 05:57:17.794847    3528 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:57:17.798233    3528 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 05:57:17.798233    3528 machine.go:94] provisionDockerMachine start ...
	I1210 05:57:17.802052    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:17.859397    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:17.860025    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:17.860025    3528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:57:18.039007    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.039007    3528 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 05:57:18.043768    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.100666    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.100666    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.100666    3528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 05:57:18.283797    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.287904    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.342863    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.343348    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.343409    3528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:57:18.533020    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
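
	The guard script above only touches /etc/hosts when no existing line already names the machine: it rewrites a 127.0.1.1 entry in place if one exists, and appends one otherwise. The same logic can be dry-run against a scratch file (a simplified sketch; the hostname functional-871500 is taken from this run, and the grep patterns are loosened for readability):

	# Sketch: exercise the hosts-file guard against a temp file, not the real /etc/hosts.
	HOSTS=$(mktemp)
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	if ! grep -q 'functional-871500' "$HOSTS"; then
	  if grep -q '^127.0.1.1' "$HOSTS"; then
	    sed -i 's/^127.0.1.1.*/127.0.1.1 functional-871500/' "$HOSTS"   # rewrite existing entry
	  else
	    echo '127.0.1.1 functional-871500' >> "$HOSTS"                  # or append a new one
	  fi
	fi
	cat "$HOSTS"   # second line now reads: 127.0.1.1 functional-871500
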
	I1210 05:57:18.533020    3528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 05:57:18.533020    3528 ubuntu.go:190] setting up certificates
	I1210 05:57:18.533020    3528 provision.go:84] configureAuth start
	I1210 05:57:18.537250    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:18.595140    3528 provision.go:143] copyHostCerts
	I1210 05:57:18.595839    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1210 05:57:18.596031    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 05:57:18.596062    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 05:57:18.596239    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 05:57:18.596845    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1210 05:57:18.597366    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 05:57:18.597406    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 05:57:18.597495    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 05:57:18.598291    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 05:57:18.598291    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 05:57:18.599093    3528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
	I1210 05:57:18.702479    3528 provision.go:177] copyRemoteCerts
	I1210 05:57:18.706176    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:57:18.709177    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.761464    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:18.886181    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1210 05:57:18.886181    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:57:18.914027    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1210 05:57:18.914027    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:57:18.939266    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1210 05:57:18.939794    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:57:18.968597    3528 provision.go:87] duration metric: took 435.5446ms to configureAuth
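
	The three PEMs copied above are exactly the files the dockerd unit later references via --tlscacert/--tlscert/--tlskey, so a quick presence check on the node closes the loop (a sketch, using this run's profile name):

	minikube -p functional-871500 ssh -- ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
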
	I1210 05:57:18.968633    3528 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:57:18.969064    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:18.972714    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.026843    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.027475    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.027475    3528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 05:57:19.213570    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 05:57:19.213570    3528 ubuntu.go:71] root file system type: overlay
	I1210 05:57:19.213570    3528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 05:57:19.217470    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.271762    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.271762    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.271762    3528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 05:57:19.465304    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 05:57:19.469988    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.524496    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.525153    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.525153    3528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 05:57:19.708281    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
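
	The command above is an update-if-changed idiom: diff -u exits 0 when the rendered unit matches what is already installed, so the mv/daemon-reload/restart branch after || only fires when the file actually changed. Stripped of the minikube specifics, the pattern looks like this (a sketch with placeholder paths):

	# Sketch: replace a config file and restart its service only on change.
	new=/tmp/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}
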
	I1210 05:57:19.708281    3528 machine.go:97] duration metric: took 1.9100246s to provisionDockerMachine
	I1210 05:57:19.708281    3528 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 05:57:19.708281    3528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:57:19.712864    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:57:19.716356    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.769263    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:19.910607    3528 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:57:19.918702    3528 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_ID="12"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:57:19.918702    3528 command_runner.go:130] > ID=debian
	I1210 05:57:19.918702    3528 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:57:19.918702    3528 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:57:19.918702    3528 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:57:19.918927    3528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:57:19.919018    3528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:57:19.919060    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 05:57:19.919569    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 05:57:19.919739    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 05:57:19.919739    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /etc/ssl/certs/113042.pem
	I1210 05:57:19.921060    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 05:57:19.921102    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> /etc/test/nested/copy/11304/hosts
	I1210 05:57:19.926330    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 05:57:19.937995    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 05:57:19.967462    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 05:57:19.996671    3528 start.go:296] duration metric: took 288.3864ms for postStartSetup
	I1210 05:57:20.001220    3528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:20.004094    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.057975    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.183984    3528 command_runner.go:130] > 1%
	I1210 05:57:20.188612    3528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:57:20.199532    3528 command_runner.go:130] > 950G
	I1210 05:57:20.200170    3528 fix.go:56] duration metric: took 2.4648044s for fixHost
	I1210 05:57:20.200170    3528 start.go:83] releasing machines lock for "functional-871500", held for 2.4648316s
	I1210 05:57:20.204329    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:20.260852    3528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 05:57:20.265678    3528 ssh_runner.go:195] Run: cat /version.json
	I1210 05:57:20.265678    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.268055    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.318377    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.318938    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.440815    3528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1210 05:57:20.440815    3528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 05:57:20.448568    3528 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:57:20.452774    3528 ssh_runner.go:195] Run: systemctl --version
	I1210 05:57:20.464224    3528 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:57:20.464224    3528 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:57:20.469738    3528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:57:20.478403    3528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:57:20.478403    3528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:57:20.483606    3528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:57:20.495780    3528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:57:20.495780    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:20.495780    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:20.495780    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:20.518759    3528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:57:20.523282    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:57:20.541393    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 05:57:20.546364    3528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 05:57:20.546364    3528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
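
	Note that the probe behind this warning failed with exit status 127 ("curl.exe: command not found"), not with a network error: the Windows binary name was invoked inside the Linux node container. A manual re-check from the host would use plain curl (a sketch, reusing this run's container name):

	docker exec functional-871500 curl -sS -m 2 https://registry.k8s.io/
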
	I1210 05:57:20.557861    3528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:57:20.562880    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:57:20.580735    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.598803    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:57:20.615367    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.637025    3528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:57:20.656757    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:57:20.676589    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:57:20.695912    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:57:20.717653    3528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:57:20.732788    3528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:57:20.737410    3528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
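
	Both kernel toggles above are runtime-only writes into /proc; the equivalent sysctl invocations, plus a persistent variant, look like this (a sketch; run on the node, where br_netfilter is already loaded):

	sysctl net.bridge.bridge-nf-call-iptables    # expect "= 1" so bridged pod traffic hits iptables
	sudo sysctl -w net.ipv4.ip_forward=1         # same effect as the echo into /proc above
	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf   # survives reboot
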
	I1210 05:57:20.756411    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:20.908020    3528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:57:21.078402    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:21.078402    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:21.083945    3528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Unit]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Description=Docker Application Container Engine
	I1210 05:57:21.102632    3528 command_runner.go:130] > Documentation=https://docs.docker.com
	I1210 05:57:21.102632    3528 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1210 05:57:21.102632    3528 command_runner.go:130] > Wants=network-online.target containerd.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > Requires=docker.socket
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitBurst=3
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitIntervalSec=60
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Service]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Type=notify
	I1210 05:57:21.102632    3528 command_runner.go:130] > Restart=always
	I1210 05:57:21.102632    3528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1210 05:57:21.102632    3528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1210 05:57:21.102632    3528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1210 05:57:21.102632    3528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1210 05:57:21.102632    3528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1210 05:57:21.102632    3528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1210 05:57:21.102632    3528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1210 05:57:21.102632    3528 command_runner.go:130] > ExecStart=
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1210 05:57:21.103158    3528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1210 05:57:21.103158    3528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNOFILE=infinity
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNPROC=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > LimitCORE=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1210 05:57:21.103378    3528 command_runner.go:130] > TasksMax=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > TimeoutStartSec=0
	I1210 05:57:21.103378    3528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1210 05:57:21.103378    3528 command_runner.go:130] > Delegate=yes
	I1210 05:57:21.103378    3528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1210 05:57:21.103378    3528 command_runner.go:130] > KillMode=process
	I1210 05:57:21.103378    3528 command_runner.go:130] > OOMScoreAdjust=-500
	I1210 05:57:21.103378    3528 command_runner.go:130] > [Install]
	I1210 05:57:21.103378    3528 command_runner.go:130] > WantedBy=multi-user.target
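
	The empty ExecStart= followed by a populated one, echoed in the unit above, is the standard systemd idiom for replacing rather than appending to an inherited ExecStart (only Type=oneshot units may accumulate several). The resolved value can be confirmed on the node with (a sketch):

	sudo systemctl show docker --property=ExecStart
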
	I1210 05:57:21.111084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.134007    3528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:57:21.193270    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.218062    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:57:21.240026    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:21.262345    3528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
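
	With /etc/crictl.yaml now pointing at the cri-dockerd socket, the endpoint can be exercised directly, either via the config file just written or by passing the endpoint explicitly (a sketch; run on the node):

	sudo crictl version                                                      # uses /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version  # explicit endpoint
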
	I1210 05:57:21.267460    3528 ssh_runner.go:195] Run: which cri-dockerd
	I1210 05:57:21.274915    3528 command_runner.go:130] > /usr/bin/cri-dockerd
	I1210 05:57:21.278860    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 05:57:21.290698    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 05:57:21.314565    3528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 05:57:21.466409    3528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 05:57:21.603844    3528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 05:57:21.603844    3528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 05:57:21.630009    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:57:21.650723    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:21.786633    3528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:57:22.595739    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:57:22.618130    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 05:57:22.639399    3528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 05:57:22.666084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:22.689760    3528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 05:57:22.826287    3528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 05:57:22.966482    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.147658    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 05:57:23.173945    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 05:57:23.199471    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.338742    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 05:57:23.455945    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:23.474438    3528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 05:57:23.478444    3528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:57:23.486000    3528 command_runner.go:130] > Device: 0,112	Inode: 1768        Links: 1
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Modify: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Change: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] >  Birth: -
	I1210 05:57:23.486000    3528 start.go:564] Will wait 60s for crictl version
	I1210 05:57:23.490664    3528 ssh_runner.go:195] Run: which crictl
	I1210 05:57:23.496443    3528 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:57:23.501067    3528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:57:23.549049    3528 command_runner.go:130] > Version:  0.1.0
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeName:  docker
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:57:23.549049    3528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 05:57:23.552780    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.592051    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.595007    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.630739    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.635076    3528 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 05:57:23.638761    3528 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 05:57:23.765960    3528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 05:57:23.770487    3528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 05:57:23.780262    3528 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1210 05:57:23.784121    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:23.838579    3528 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:57:23.838579    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:23.841570    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.871575    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.871575    3528 docker.go:621] Images already preloaded, skipping extraction
	I1210 05:57:23.875579    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.907148    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.907148    3528 cache_images.go:86] Images are preloaded, skipping loading
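The two identical "docker images" listings above are the preload check: minikube lists the tags already present in the node's Docker daemon and, because every image required for v1.35.0-rc.1 is there, skips extracting the preload tarball. A minimal way to reproduce the check by hand, assuming the functional-871500 profile is running and minikube is on PATH (the format string is the one shown in the log):

	$ minikube -p functional-871500 ssh -- docker images --format "{{.Repository}}:{{.Tag}}"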
	I1210 05:57:23.907148    3528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 05:57:23.907668    3528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:57:23.911609    3528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 05:57:23.978720    3528 command_runner.go:130] > cgroupfs
	I1210 05:57:23.983482    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:23.983482    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:23.983482    3528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:57:23.983482    3528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:57:23.983482    3528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
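The block above is the full config minikube renders for this node: four kubeadm-family documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file, which the next steps upload to the node as /var/tmp/minikube/kubeadm.yaml.new. As a sketch, recent kubeadm releases can sanity-check such a file; whether the v1.35.0-rc.1 binary staged on the node supports this subcommand is an assumption here:

	# hypothetical manual check inside the node; not a step minikube runs itself
	$ sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new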
	I1210 05:57:23.987498    3528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubeadm
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubectl
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubelet
	I1210 05:57:24.000182    3528 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:57:24.004093    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:57:24.018408    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 05:57:24.041215    3528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:57:24.061272    3528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1210 05:57:24.082615    3528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:57:24.095804    3528 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
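This grep exists because the ClusterConfiguration above pins controlPlaneEndpoint to control-plane.minikube.internal:8441, so that name must resolve inside the node; the hit confirms /etc/hosts already maps it to the node IP 192.168.49.2. The same check from the host, assuming the profile is up (hypothetical session, output matching the line logged above):

	$ minikube -p functional-871500 ssh -- grep control-plane.minikube.internal /etc/hosts
	192.168.49.2	control-plane.minikube.internal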
	I1210 05:57:24.101162    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:24.247994    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
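With the kubelet unit drop-in (10-kubeadm.conf) and service file rewritten, the sequence is the usual systemd one: daemon-reload to pick up the changed units, then start. A quick way to confirm the kubelet actually came up, assuming systemd inside the kicbase node:

	$ minikube -p functional-871500 ssh -- sudo systemctl is-active kubelet
	$ minikube -p functional-871500 ssh -- sudo journalctl -u kubelet --no-pager -n 20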
	I1210 05:57:24.548481    3528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 05:57:24.548481    3528 certs.go:195] generating shared ca certs ...
	I1210 05:57:24.549012    3528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:24.549698    3528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 05:57:24.549774    3528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 05:57:24.549774    3528 certs.go:257] generating profile certs ...
	I1210 05:57:24.550590    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:57:24.551460    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:57:24.551604    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:57:24.551764    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:57:24.551869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:57:24.552075    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:57:24.552075    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 05:57:24.552075    3528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 05:57:24.552617    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 05:57:24.553394    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 05:57:24.553588    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.553766    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem -> /usr/share/ca-certificates/11304.pem
	I1210 05:57:24.553869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /usr/share/ca-certificates/113042.pem
	I1210 05:57:24.554786    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:57:24.581958    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:57:24.609312    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:57:24.634601    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:57:24.661713    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:57:24.690256    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:57:24.717784    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:57:24.748075    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:57:24.779590    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:57:24.808619    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 05:57:24.838348    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 05:57:24.862790    3528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:57:24.888297    3528 ssh_runner.go:195] Run: openssl version
	I1210 05:57:24.898078    3528 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:57:24.902400    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.918304    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:57:24.936062    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946045    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946080    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.950017    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.993898    3528 command_runner.go:130] > b5213941
	I1210 05:57:24.999156    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:57:25.016159    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.034260    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 05:57:25.053147    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.065786    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.108176    3528 command_runner.go:130] > 51391683
	I1210 05:57:25.113321    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:57:25.129918    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.147630    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 05:57:25.167521    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.180991    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.223232    3528 command_runner.go:130] > 3ec20f2e
	I1210 05:57:25.227937    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
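The repeated openssl x509 -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup: a CA is only found during verification if it is reachable in /etc/ssl/certs under the name <subject-hash>.0. Condensed into one sketch for the minikubeCA cert (the hash b5213941 matches the value logged above; on Debian-style systems update-ca-certificates would normally maintain these links):

	$ hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941 here
	$ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0
	$ sudo test -L /etc/ssl/certs/${hash}.0 && echo linked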
	I1210 05:57:25.244300    3528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:57:25.251407    3528 command_runner.go:130] > Device: 8,48	Inode: 15342       Links: 1
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: 2025-12-10 05:53:12.664767007 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Modify: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Change: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] >  Birth: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.255353    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:57:25.300587    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.306046    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:57:25.348642    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.354977    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:57:25.399294    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.403503    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:57:25.448300    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.453152    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:57:25.506357    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.511028    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:57:25.553903    3528 command_runner.go:130] > Certificate will not expire
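Each "Certificate will not expire" line is the output of openssl's -checkend test: -checkend 86400 asks whether the certificate will still be valid 86400 seconds (24 h) from now, exiting 0 if so. A standalone sketch of the same probe against one of the certs checked above (hypothetical session):

	$ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; echo "exit=$?"
	Certificate will not expire
	exit=0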
	I1210 05:57:25.554908    3528 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:25.558842    3528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 05:57:25.593738    3528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:57:25.607577    3528 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:57:25.607628    3528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:57:25.607628    3528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:57:25.611091    3528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:57:25.623212    3528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:57:25.626623    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.680358    3528 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.681186    3528 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-871500" cluster setting kubeconfig missing "functional-871500" context setting]
	I1210 05:57:25.681273    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
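The endpoint verification failed to find a "functional-871500" cluster or context entry in the kubeconfig, so minikube repairs the file, writing both entries under the file lock acquired above. Afterwards the repaired entries should be inspectable from the host, assuming kubectl is installed there:

	$ kubectl config get-contexts functional-871500
	$ kubectl config view --minify --context functional-871500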
	I1210 05:57:25.700123    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.700864    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.702157    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.702219    3528 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:57:25.702289    3528 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:57:25.706500    3528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:57:25.721533    3528 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 05:57:25.721533    3528 kubeadm.go:602] duration metric: took 113.9037ms to restartPrimaryControlPlane
	I1210 05:57:25.721533    3528 kubeadm.go:403] duration metric: took 166.6224ms to StartCluster
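This is the restart fast path: restartPrimaryControlPlane diffs the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new, and because diff exits 0 it concludes the running cluster "does not require reconfiguration", letting StartCluster finish in roughly 167ms without rerunning kubeadm. The same comparison by hand, assuming the profile is up:

	$ minikube -p functional-871500 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"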
	I1210 05:57:25.721533    3528 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.721533    3528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.722880    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.723468    3528 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 05:57:25.723468    3528 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:57:25.723468    3528 addons.go:70] Setting storage-provisioner=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:70] Setting default-storageclass=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:239] Setting addon storage-provisioner=true in "functional-871500"
	I1210 05:57:25.723990    3528 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-871500"
	I1210 05:57:25.723990    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:25.724039    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.727290    3528 out.go:179] * Verifying Kubernetes components...
	I1210 05:57:25.732528    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733215    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733847    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:25.784477    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.784477    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.785479    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.785479    3528 addons.go:239] Setting addon default-storageclass=true in "functional-871500"
	I1210 05:57:25.785479    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.792481    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.809483    3528 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:25.812486    3528 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:25.812486    3528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:57:25.815477    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.843475    3528 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:25.843475    3528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:57:25.846475    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.863476    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.889481    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:25.893492    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.997793    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.023732    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.053186    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:26.112921    3528 node_ready.go:35] waiting up to 6m0s for node "functional-871500" to be "Ready" ...
	I1210 05:57:26.112921    3528 type.go:168] "Request Body" body=""
	I1210 05:57:26.113457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:26.116638    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
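From here on, the Request/Response pairs are minikube's client-go poll of the node object, waiting up to 6m0s for functional-871500 to report Ready; an empty response status plus the Retry-After responses seen below mean the apiserver behind 127.0.0.1:50086 is not serving yet. An equivalent watch from the host, using the context repaired earlier:

	$ kubectl --context functional-871500 get node functional-871500 -w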
	I1210 05:57:26.133091    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.136407    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.136407    3528 retry.go:31] will retry after 345.217772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.150366    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.202827    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.202827    3528 retry.go:31] will retry after 151.034764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
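	Both addon applies fail for the same underlying reason: kubectl's client-side validation needs the OpenAPI document from https://localhost:8441, and nothing is listening there yet (connection refused), so minikube retries each apply with growing backoff rather than taking the error message's --validate=false suggestion. A hedged manual alternative inside the node, assuming curl is present and relying on the apiserver's /healthz endpoint (this is not what minikube itself runs):

	# wait for the apiserver to answer, then apply once
	$ until curl -ksf https://localhost:8441/healthz >/dev/null; do sleep 1; done
	$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml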
	I1210 05:57:26.359087    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.431671    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.436291    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.436291    3528 retry.go:31] will retry after 206.058838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.486383    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.557721    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.560620    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.560620    3528 retry.go:31] will retry after 499.995799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.648783    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.718122    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.721048    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.721048    3528 retry.go:31] will retry after 393.754282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.063815    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.116921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:27.116921    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:27.119587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:27.119858    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:27.142617    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.145831    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.145969    3528 retry.go:31] will retry after 468.483229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.204933    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.208432    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.208432    3528 retry.go:31] will retry after 855.193396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.619421    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.706849    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.710739    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.710739    3528 retry.go:31] will retry after 912.738336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.069754    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:28.120644    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:28.120644    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:28.123531    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:28.143254    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.148927    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.148927    3528 retry.go:31] will retry after 983.332816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.628567    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:28.701176    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.706795    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.706795    3528 retry.go:31] will retry after 1.385287928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.123599    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:29.123599    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:29.126305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:29.136958    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:29.206724    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:29.211387    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.211387    3528 retry.go:31] will retry after 1.736840395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.096718    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:30.126845    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:30.126845    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:30.129697    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:30.181502    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:30.186062    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.186111    3528 retry.go:31] will retry after 1.361370091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.954728    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:31.028355    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.034556    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.034556    3528 retry.go:31] will retry after 1.491617713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.130593    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:31.130593    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:31.133462    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
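
The with_retry/round_trippers lines show client-go's REST client backing off: each GET of /api/v1/nodes/functional-871500 comes back unusable, the client waits the 1s hinted by the Retry-After handling, retries up to attempt=10, and the failure is then surfaced to node_ready as the EOF warnings seen elsewhere in this log. A minimal net/http sketch of honoring a Retry-After header follows; it assumes a plain HTTP client rather than client-go's real implementation in k8s.io/client-go/rest, and the target URL is just the one polled above.

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getWithRetryAfter retries a GET while the server keeps asking the
    // client to back off via the Retry-After header.
    func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                return nil, err
            }
            ra := resp.Header.Get("Retry-After")
            if ra == "" {
                return resp, nil // no throttling hint; hand the response back
            }
            resp.Body.Close()
            secs, convErr := strconv.Atoi(ra)
            if convErr != nil || secs < 1 {
                secs = 1 // fall back to the 1s delay seen in the log
            }
            fmt.Printf("got Retry-After %ds on attempt %d\n", secs, attempt)
            time.Sleep(time.Duration(secs) * time.Second)
        }
        return nil, fmt.Errorf("still throttled after %d attempts", maxAttempts)
    }

    func main() {
        // Stand-in target; this is the node URL polled in the log above.
        _, err := getWithRetryAfter(http.DefaultClient, "https://127.0.0.1:50086/api/v1/nodes/functional-871500", 10)
        fmt.Println(err)
    }
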
	I1210 05:57:31.553535    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:31.628770    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.634748    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.634748    3528 retry.go:31] will retry after 3.561022392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.134739    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:32.134739    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:32.138071    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:32.531847    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:32.611685    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:32.617246    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.617246    3528 retry.go:31] will retry after 5.95380248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:33.138488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:33.138875    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:33.141787    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:34.142311    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:34.142734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:34.145176    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.146145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:35.146145    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:35.148924    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.201546    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:35.276874    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:35.281183    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:35.281183    3528 retry.go:31] will retry after 3.730531418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:36.149846    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:36.149846    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.152788    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:57:36.152788    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:36.152788    3528 type.go:168] "Request Body" body=""
	I1210 05:57:36.152788    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.155425    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
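
node_ready is polling the node's Ready condition, and the EOF above means the request never produced a usable response. A minimal client-go sketch of the same check; the kubeconfig path is the one named in the log's kubectl invocations, and any reachable cluster config would do.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "functional-871500", metav1.GetOptions{})
        if err != nil {
            // An unreachable apiserver surfaces here, e.g. as the EOF in the log.
            fmt.Println("error getting node:", err)
            return
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("node Ready condition: %s\n", cond.Status)
            }
        }
    }
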
	I1210 05:57:37.155901    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:37.155901    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:37.159513    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.161109    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:38.161109    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:38.164724    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.577263    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:38.649489    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:38.652783    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:38.652883    3528 retry.go:31] will retry after 3.457172569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.016926    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:39.102009    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:39.106825    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.106825    3528 retry.go:31] will retry after 7.958311304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.165052    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:39.165052    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:39.167612    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:40.168385    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:40.168385    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:40.171568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:41.172124    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:41.172124    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:41.175998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:42.114835    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:42.176733    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:42.176733    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:42.179377    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:42.194232    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:42.198994    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:42.198994    3528 retry.go:31] will retry after 11.400414998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:43.179774    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:43.179774    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:43.182962    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:44.183364    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:44.183364    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:44.186385    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:45.186936    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:45.187376    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:45.189591    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:46.190096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:46.190096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.196158    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	W1210 05:57:46.196158    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:46.196158    3528 type.go:168] "Request Body" body=""
	I1210 05:57:46.196158    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.198622    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:47.071512    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:47.150023    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:47.153571    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.153571    3528 retry.go:31] will retry after 8.685329621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
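
Every one of these apply failures is the same root error: kubectl's client-side validation tries to download the OpenAPI v2 document from the apiserver at localhost:8441 and the connection is refused. The --validate=false hint in the message would only skip that schema fetch; the apply itself still needs the apiserver, so it would fail the same way against a dead endpoint. A minimal Go sketch reproducing just the failing fetch; InsecureSkipVerify is an assumption for a local probe, not what kubectl does.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   32 * time.Second, // matches the ?timeout=32s in the log
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8441/openapi/v2")
        if err != nil {
            // With the apiserver down this prints the same "connection refused"
            // chain that kubectl reports above.
            fmt.Println("openapi download failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
    }
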
	I1210 05:57:47.199356    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:47.199356    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:47.202855    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:48.203136    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:48.203136    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:48.209086    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:57:49.209940    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:49.209940    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:49.213512    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:50.214412    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:50.214412    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:50.218493    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:57:51.219009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:51.219009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:51.221689    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:52.221931    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:52.221931    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:52.224876    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:53.225848    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:53.225848    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:53.229481    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:53.604916    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:53.684553    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:53.688941    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:53.688941    3528 retry.go:31] will retry after 15.037235136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:54.230291    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:54.230291    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:54.233031    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:55.233749    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:55.233749    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:55.236864    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:55.845563    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:55.917684    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:55.920989    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:55.920989    3528 retry.go:31] will retry after 14.528574699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:56.237162    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:56.237162    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.240358    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:57:56.240358    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:56.240358    3528 type.go:168] "Request Body" body=""
	I1210 05:57:56.240358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.242693    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:57.243108    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:57.243108    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:57.246459    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:58.247768    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:58.248150    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:58.251587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:59.252608    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:59.252608    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:59.255751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:00.256340    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:00.256340    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:00.259424    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:01.260417    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:01.260417    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:01.263835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:02.264658    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:02.264976    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:02.268894    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:03.269646    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:03.270040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:03.272742    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:04.273295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:04.273295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:04.276636    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:05.277239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:05.277639    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:05.280629    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:06.281483    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:06.281483    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.285745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1210 05:58:06.285802    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:06.285840    3528 type.go:168] "Request Body" body=""
	I1210 05:58:06.285987    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.288564    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:07.289127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:07.289127    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:07.292563    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:08.293072    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:08.293072    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:08.297241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:08.732392    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:08.811298    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:08.814895    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:08.814895    3528 retry.go:31] will retry after 24.059893548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:09.297667    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:09.297667    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:09.300824    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:10.301402    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:10.301402    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:10.304411    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:10.455124    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:10.546239    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:10.546239    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:10.546239    3528 retry.go:31] will retry after 31.876597574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:11.304978    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:11.304978    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:11.308149    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:12.308734    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:12.308734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:12.311812    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:13.312561    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:13.313241    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:13.316204    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:14.317485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:14.317883    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:14.320038    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:15.320460    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:15.320460    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:15.323420    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:16.323723    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:16.323723    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.326977    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:16.326977    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:16.327139    3528 type.go:168] "Request Body" body=""
	I1210 05:58:16.327227    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.329681    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:17.330932    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:17.330932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:17.333882    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:18.334334    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:18.334798    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:18.338144    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:19.338534    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:19.338534    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:19.342989    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:20.343612    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:20.343612    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:20.346805    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:21.347681    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:21.347681    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:21.350863    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:22.351290    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:22.351290    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:22.354536    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:23.355239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:23.355239    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:23.358499    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:24.359467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:24.359467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:24.364653    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:58:25.365025    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:25.365025    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:25.368433    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:26.369056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:26.369056    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.372426    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:26.372457    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:26.372457 - 05:58:32.399019    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500 (empty request body), then Retry-After attempts 1-6 at 1s intervals; same headers, every response empty-status in 2-3 ms
	I1210 05:58:32.881902    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:32.967281    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:32.972519    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:32.972519    3528 retry.go:31] will retry after 41.610684516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
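The retry.go record above re-queues the failed kubectl apply with a long randomized delay instead of failing the addon immediately. A self-contained sketch of that pattern, under illustrative names (retryUntil, maxWait) rather than minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs fn with a randomized pause between attempts until it
// succeeds or maxWait elapses, mirroring the "will retry after 41.6s" record.
func retryUntil(maxWait time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxWait)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// A randomized delay spreads retries out so several failing
		// callbacks do not hammer the apiserver in lockstep.
		delay := time.Duration(rand.Int63n(int64(45 * time.Second)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	err := retryUntil(3*time.Minute, func() error {
		attempts++
		if attempts < 3 {
			// Stand-in for kubectl apply failing while the
			// apiserver on localhost:8441 is down.
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err, "after", attempts, "attempts")
}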
	I1210 05:58:33.399525 - 05:58:36.414578    3528 [condensed] Retry-After attempts 7-10 of the same GET; every response empty-status in 3 ms
	W1210 05:58:36.414673    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:36.414815 - 05:58:41.437621    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-5; every response empty-status in 2-3 ms
	I1210 05:58:42.429097    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:42.438217 - 05:58:42.440917    3528 [condensed] Retry-After attempt 6 of the same GET; empty-status response in 2 ms
	I1210 05:58:42.509794    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.514955    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.515232    3528 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
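The apply itself only fails because kubectl's client-side validation needs the apiserver's OpenAPI document, and the apiserver on localhost:8441 is refusing connections; the suggested --validate=false would skip that check but not the underlying outage. A hedged sketch of gating the apply on the apiserver's /readyz endpoint instead (/readyz is a standard kube-apiserver route; waitForAPIServer is an invented helper, not what minikube does):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it answers
// 200 OK or the deadline passes. InsecureSkipVerify is tolerable here only
// because this probes localhost inside the minikube node.
func waitForAPIServer(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // safe to run kubectl apply now
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not ready after %v", base, timeout)
}

func main() {
	if err := waitForAPIServer("https://localhost:8441", 90*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver ready; apply addon manifests")
}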
	I1210 05:58:43.441740 - 05:58:46.458078    3528 [condensed] Retry-After attempts 7-10 of the same GET; every response empty-status in 2-3 ms
	W1210 05:58:46.458078    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:46.458078 - 05:58:56.498522    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-10; every response empty-status in 2-4 ms
	W1210 05:58:56.498627    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:56.498685 - 05:59:06.538826    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-10; every response empty-status in 2-3 ms
	W1210 05:59:06.538826    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:06.538826 - 05:59:14.574910    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-8; every response empty-status in 2-3 ms
	I1210 05:59:14.588699    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:59:14.659982    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:59:14.667272    3528 out.go:179] * Enabled addons: 
	I1210 05:59:14.669291    3528 addons.go:530] duration metric: took 1m48.9444759s for enable addons: enabled=[]
	I1210 05:59:15.575548 - 05:59:16.581535    3528 [condensed] Retry-After attempts 9-10 of the same GET; every response empty-status in 2-3 ms
	W1210 05:59:16.581626    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:16.581709 - 05:59:26.624539    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-10; every response empty-status in 2-4 ms
	W1210 05:59:26.624539    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:26.624539 - 05:59:36.667272    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-10; every response empty-status in 2-4 ms
	W1210 05:59:36.667368    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:36.667437 - 05:59:46.708460    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-10; every response empty-status in 2-4 ms
	W1210 05:59:46.709023    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:46.709124 - 05:59:55.750285    3528 [condensed] fresh GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, then Retry-After attempts 1-9; every response empty-status in 2-3 ms
	I1210 05:59:56.750719    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:56.750719    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:56.753273    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:59:56.753273    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... seven further identical retry cycles elided: attempts 1-10 at 1s intervals against https://127.0.0.1:50086/api/v1/nodes/functional-871500, each ending with the same node_ready.go:55 warning — error getting node "functional-871500" condition "Ready" status (will retry): EOF — logged at 06:00:06, 06:00:16, 06:00:26, 06:00:36, 06:00:46, 06:00:56, and 06:01:07 ...]
	W1210 06:01:17.082399    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:17.082476    3528 type.go:168] "Request Body" body=""
	I1210 06:01:17.082601    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:17.084577    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:01:18.085283    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:18.085283    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:18.087761    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:19.089284    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:19.089284    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:19.093369    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:01:20.094032    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:20.094032    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:20.097108    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:21.097562    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:21.097562    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:21.104228    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1210 06:01:22.104512    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:22.104512    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:22.106967    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:23.107603    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:23.107603    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:23.110798    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:24.111778    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:24.111778    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:24.114416    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:25.115471    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:25.115471    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:25.118129    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:26.118485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:26.118485    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:26.121278    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:27.121884    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:27.121884    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:27.125182    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:01:27.125182    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
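	[Editor's note: the cycle above is client-go's fixed-backoff retry. Each GET to the apiserver returns an empty response, with_retry.go honors a 1s "Retry-After" delay for up to ten attempts, and node_ready.go then logs the EOF warning and starts the poll over. A minimal, self-contained Go sketch of that poll/retry shape follows; this is not minikube's actual code — the URL is taken from the log, and TLS setup and the real client-go plumbing are omitted.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	// getNode issues one GET against the apiserver URL. An io.EOF here mirrors
	// the endpoint accepting the connection and closing it before writing a
	// response, which is what the empty status="" responses above indicate.
	func getNode(client *http.Client, url string) error {
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unexpected status: %s", resp.Status)
		}
		_, err = io.ReadAll(resp.Body)
		return err
	}

	func main() {
		// URL taken from the log above; a real client would also need the
		// cluster CA and client certificates, which this sketch omits.
		const url = "https://127.0.0.1:50086/api/v1/nodes/functional-871500"
		client := &http.Client{Timeout: 5 * time.Second}

		// Outer loop: one node_ready poll per iteration (~10s each in the log).
		for poll := 0; poll < 6; poll++ {
			var err error
			// Inner loop: attempts 1-10 with a fixed 1s delay, matching the
			// with_retry.go "Got a Retry-After response" lines.
			for attempt := 1; attempt <= 10; attempt++ {
				if err = getNode(client, url); err == nil {
					fmt.Println("node responded; check its Ready condition next")
					return
				}
				log.Printf("attempt=%d err=%v; retrying in 1s", attempt, err)
				time.Sleep(1 * time.Second)
			}
			if errors.Is(err, io.EOF) {
				log.Printf("error getting node \"Ready\" status (will retry): %v", err)
			}
		}
		log.Fatal("node never became reachable")
	}

	Under this scheme each node_ready check consumes roughly ten seconds, which is why the EOF warnings in this log land ten seconds apart.]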
	[... identical poll/retry cycles elided (06:01:27-06:02:57): each repeats the pattern above — a fresh GET to https://127.0.0.1:50086/api/v1/nodes/functional-871500, ten empty responses retried at 1s intervals (with_retry.go attempts 1-10), then the node_ready EOF warning — with warnings logged at 06:01:37, 06:01:47, 06:01:57, 06:02:07, 06:02:17, 06:02:27, 06:02:37 and 06:02:47; the final cycle's warning follows ...]
	W1210 06:02:57.497800    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:57.497998    3528 type.go:168] "Request Body" body=""
	I1210 06:02:57.498076    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:57.500781    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:58.501021    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:58.501021    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:58.504136    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:59.504488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:59.504969    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:59.507730    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:00.508009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:00.508009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:00.511476    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:01.512344    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:01.512344    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:01.515549    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:02.516467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:02.516467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:02.520405    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:03.520921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:03.521256    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:03.524252    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:04.524513    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:04.524953    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:04.527628    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:05.529050    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:05.529050    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:05.536803    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1210 06:03:06.537822    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:06.537822    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:06.541195    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:07.541552    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:07.541552    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:07.544874    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:03:07.544874    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:03:07.544874    3528 type.go:168] "Request Body" body=""
	I1210 06:03:07.544874    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:07.548078    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:08.548780    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:08.548969    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:08.551745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:09.552670    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:09.552670    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:09.556239    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:10.556550    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:10.556906    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:10.559896    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:11.560632    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:11.560632    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:11.563477    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:12.564335    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:12.564335    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:12.567101    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:13.567254    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:13.567254    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:13.570684    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:14.571214    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:14.571214    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:14.573567    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:15.574056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:15.574401    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:15.577034    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:16.577296    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:16.577296    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:16.580507    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:17.580670    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:17.580670    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:17.584345    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:03:17.584442    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:03:17.584620    3528 type.go:168] "Request Body" body=""
	I1210 06:03:17.584714    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:17.586766    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:18.587485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:18.587485    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:18.590661    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:19.591695    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:19.592099    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:19.594643    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:20.595361    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:20.595361    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:20.597940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:21.598595    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:21.598595    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:21.601244    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:22.601730    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:22.601730    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:22.604442    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:23.605664    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:23.605664    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:23.608404    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:24.609206    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:24.609206    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:24.612484    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:25.613066    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:25.613066    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:25.615998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:03:26.117891    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 06:03:26.117891    3528 node_ready.go:38] duration metric: took 6m0.0004685s for node "functional-871500" to be "Ready" ...
	I1210 06:03:26.123026    3528 out.go:203] 
	W1210 06:03:26.125419    3528 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:03:26.125419    3528 out.go:285] * 
	W1210 06:03:26.127475    3528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:03:26.130878    3528 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-871500 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m10.0961937s for "functional-871500" cluster.
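The wait loop captured in the stderr above is client-go honoring 1s Retry-After hints while minikube's node-readiness check burns through its 6m0s StartHostTimeout, with every GET against https://127.0.0.1:50086 ending in EOF. The general shape of such a deadline-bounded readiness poll can be sketched with client-go as follows; this is a minimal illustration only, not minikube's actual node_ready.go, and the helper name waitNodeReady is hypothetical:

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the API server until the named node reports
	// condition Ready=True, or until the deadline expires with the same
	// failure seen above: "context deadline exceeded".
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // node is Ready
					}
				}
			}
			// Keep retrying (transient errors included) until the context runs out.
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
			case <-time.After(time.Second):
			}
		}
	}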
I1210 06:03:26.905407   11304 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
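The one detail that matters in the inspect output above is NetworkSettings.Ports: the API server's container port 8441/tcp is published on 127.0.0.1:50086, which is exactly the endpoint the failed GET requests in the stderr log were targeting. A mapping like that can be extracted from `docker inspect` JSON programmatically; the sketch below decodes only the fields needed (the helper name hostPortFor is hypothetical, and it assumes the docker CLI is on PATH):

	package dockerport

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// hostPortFor runs `docker inspect` on a container and returns the host
	// port that the given container port (e.g. "8441/tcp") is published on.
	func hostPortFor(container, containerPort string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		// docker inspect prints a JSON array; decode just what we need.
		var info []struct {
			NetworkSettings struct {
				Ports map[string][]struct {
					HostIp   string
					HostPort string
				}
			}
		}
		if err := json.Unmarshal(out, &info); err != nil {
			return "", err
		}
		if len(info) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := info[0].NetworkSettings.Ports[containerPort]
		if len(bindings) == 0 {
			return "", fmt.Errorf("port %s is not published", containerPort)
		}
		return bindings[0].HostPort, nil // "50086" for "8441/tcp" above
	}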
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (699.4735ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
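The `--format={{.Host}}` argument is a Go text/template evaluated against minikube's status struct, which is how the command can print `Running` for the host while the exit code still flags trouble elsewhere in the cluster. The same mechanism in miniature (the Status type below is a stand-in for illustration, not minikube's real struct):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct a CLI renders with --format;
	// the real minikube status type carries more fields than these.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		// "{{.Host}}" selects a single field, exactly like --format={{.Host}}.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st)
	}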
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.6800484s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ docker-env     │ functional-493600 docker-env                                                                                                                              │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image save kicbase/echo-server:functional-493600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image rm kicbase/echo-server:functional-493600 --alsologtostderr                                                                        │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service        │ functional-493600 service hello-node --url --format={{.IP}}                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ docker-env     │ functional-493600 docker-env                                                                                                                              │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh            │ functional-493600 ssh sudo cat /etc/test/nested/copy/11304/hosts                                                                                          │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image save --daemon kicbase/echo-server:functional-493600 --alsologtostderr                                                             │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format short --alsologtostderr                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format yaml --alsologtostderr                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh            │ functional-493600 ssh pgrep buildkitd                                                                                                                     │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                                                    │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format json --alsologtostderr                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service        │ functional-493600 service hello-node --url                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image ls --format table --alsologtostderr                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete         │ -p functional-493600                                                                                                                                      │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start          │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	│ start          │ -p functional-871500 --alsologtostderr -v=8                                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:57 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:57:16
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:57:16.875847    3528 out.go:360] Setting OutFile to fd 1624 ...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.917657    3528 out.go:374] Setting ErrFile to fd 1612...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.932616    3528 out.go:368] Setting JSON to false
	I1210 05:57:16.934770    3528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5168,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:57:16.934770    3528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:57:16.939605    3528 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:57:16.942014    3528 notify.go:221] Checking for updates...
	I1210 05:57:16.946622    3528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:16.950394    3528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:57:16.952350    3528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:57:16.955212    3528 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:57:16.957439    3528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:57:16.962034    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:16.962229    3528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:57:17.077929    3528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:57:17.082453    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.310960    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.287646185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.314972    3528 out.go:179] * Using the docker driver based on existing profile
	I1210 05:57:17.316973    3528 start.go:309] selected driver: docker
	I1210 05:57:17.316973    3528 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.316973    3528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:57:17.322956    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.562979    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.536373793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.650233    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:17.650233    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:17.650860    3528 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.654219    3528 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 05:57:17.656244    3528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:57:17.659128    3528 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:57:17.661459    3528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:57:17.661459    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:17.661583    3528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:57:17.661583    3528 cache.go:65] Caching tarball of preloaded images
	I1210 05:57:17.661583    3528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 05:57:17.662115    3528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
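The three preload lines above are minikube's cache check: if the tarball is already under the cache directory, the download is skipped. As a hedged sketch of the same existence test done by hand (assuming the default ~/.minikube location, onto which the Windows path in the log maps):

	preload="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4"
	if [ -f "$preload" ]; then
	  echo "preload cached, download would be skipped"
	else
	  echo "preload missing; minikube would download it" >&2
	fi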
	I1210 05:57:17.662465    3528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 05:57:17.734611    3528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:57:17.734611    3528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 05:57:17.734611    3528 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:57:17.734611    3528 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:57:17.735277    3528 start.go:364] duration metric: took 104.4µs to acquireMachinesLock for "functional-871500"
	I1210 05:57:17.735336    3528 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:57:17.735336    3528 fix.go:54] fixHost starting: 
	I1210 05:57:17.741445    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:17.794847    3528 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 05:57:17.794847    3528 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:57:17.798233    3528 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 05:57:17.798233    3528 machine.go:94] provisionDockerMachine start ...
	I1210 05:57:17.802052    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:17.859397    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:17.860025    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:17.860025    3528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:57:18.039007    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.039007    3528 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 05:57:18.043768    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.100666    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.100666    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.100666    3528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 05:57:18.283797    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.287904    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.342863    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.343348    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.343409    3528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:57:18.533020    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
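The SSH command above is the usual idempotent /etc/hosts pin: touch the file only if the hostname is absent, and prefer rewriting an existing 127.0.1.1 line over appending a new one. A standalone sketch of the same pattern (NAME taken from the log for illustration):

	NAME=functional-871500
	if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
	  if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi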
	I1210 05:57:18.533020    3528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 05:57:18.533020    3528 ubuntu.go:190] setting up certificates
	I1210 05:57:18.533020    3528 provision.go:84] configureAuth start
	I1210 05:57:18.537250    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:18.595140    3528 provision.go:143] copyHostCerts
	I1210 05:57:18.595839    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1210 05:57:18.596031    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 05:57:18.596062    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 05:57:18.596239    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 05:57:18.596845    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1210 05:57:18.597366    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 05:57:18.597406    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 05:57:18.597495    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 05:57:18.598291    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 05:57:18.598291    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 05:57:18.599093    3528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
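minikube issues that server certificate internally in Go; purely as an illustrative equivalent (openssl swapped in, file names assumed), signing a server cert against the same CA with the SAN list shown in the log would look roughly like:

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.functional-871500" \
	  -keyout server-key.pem -out server.csr
	printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-871500,DNS:localhost,DNS:minikube\n' > san.cnf
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile san.cnf -days 365 -out server.pem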
	I1210 05:57:18.702479    3528 provision.go:177] copyRemoteCerts
	I1210 05:57:18.706176    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:57:18.709177    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.761464    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:18.886181    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1210 05:57:18.886181    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:57:18.914027    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1210 05:57:18.914027    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:57:18.939266    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1210 05:57:18.939794    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:57:18.968597    3528 provision.go:87] duration metric: took 435.5446ms to configureAuth
	I1210 05:57:18.968633    3528 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:57:18.969064    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:18.972714    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.026843    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.027475    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.027475    3528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 05:57:19.213570    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 05:57:19.213570    3528 ubuntu.go:71] root file system type: overlay
	I1210 05:57:19.213570    3528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 05:57:19.217470    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.271762    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.271762    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.271762    3528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 05:57:19.465304    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 05:57:19.469988    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.524496    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.525153    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.525153    3528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 05:57:19.708281    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
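The one-liner above is the classic install-only-if-changed idiom: `diff` succeeds when the old and new units match, so nothing happens; only a real change triggers the move, the daemon-reload, and the docker restart. Unrolled for readability (paths exactly as in the log):

	src=/lib/systemd/system/docker.service.new
	dst=/lib/systemd/system/docker.service
	if ! sudo diff -u "$dst" "$src"; then
	  sudo mv "$src" "$dst"
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi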
	I1210 05:57:19.708281    3528 machine.go:97] duration metric: took 1.9100246s to provisionDockerMachine
	I1210 05:57:19.708281    3528 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 05:57:19.708281    3528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:57:19.712864    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:57:19.716356    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.769263    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:19.910607    3528 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:57:19.918702    3528 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_ID="12"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:57:19.918702    3528 command_runner.go:130] > ID=debian
	I1210 05:57:19.918702    3528 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:57:19.918702    3528 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:57:19.918702    3528 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:57:19.918927    3528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:57:19.919018    3528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:57:19.919060    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 05:57:19.919569    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 05:57:19.919739    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 05:57:19.919739    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /etc/ssl/certs/113042.pem
	I1210 05:57:19.921060    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 05:57:19.921102    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> /etc/test/nested/copy/11304/hosts
	I1210 05:57:19.926330    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 05:57:19.937995    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 05:57:19.967462    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 05:57:19.996671    3528 start.go:296] duration metric: took 288.3864ms for postStartSetup
	I1210 05:57:20.001220    3528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:20.004094    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.057975    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.183984    3528 command_runner.go:130] > 1%
	I1210 05:57:20.188612    3528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:57:20.199532    3528 command_runner.go:130] > 950G
	I1210 05:57:20.200170    3528 fix.go:56] duration metric: took 2.4648044s for fixHost
	I1210 05:57:20.200170    3528 start.go:83] releasing machines lock for "functional-871500", held for 2.4648316s
	I1210 05:57:20.204329    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:20.260852    3528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 05:57:20.265678    3528 ssh_runner.go:195] Run: cat /version.json
	I1210 05:57:20.265678    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.268055    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.318377    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.318938    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.440815    3528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1210 05:57:20.440815    3528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 05:57:20.448568    3528 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:57:20.452774    3528 ssh_runner.go:195] Run: systemctl --version
	I1210 05:57:20.464224    3528 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:57:20.464224    3528 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:57:20.469738    3528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:57:20.478403    3528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:57:20.478403    3528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:57:20.483606    3528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:57:20.495780    3528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:57:20.495780    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:20.495780    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:20.495780    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:20.518759    3528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:57:20.523282    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:57:20.541393    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 05:57:20.546364    3528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 05:57:20.546364    3528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 05:57:20.557861    3528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:57:20.562880    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:57:20.580735    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.598803    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:57:20.615367    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.637025    3528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:57:20.656757    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:57:20.676589    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:57:20.695912    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:57:20.717653    3528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:57:20.732788    3528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:57:20.737410    3528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
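Both kernel knobs touched here are hard Kubernetes prerequisites: bridged pod traffic must traverse iptables, and IPv4 forwarding must be enabled. A quick manual verification of the state the log just established:

	sysctl net.bridge.bridge-nf-call-iptables   # expect "= 1"
	cat /proc/sys/net/ipv4/ip_forward           # expect "1"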
	I1210 05:57:20.756411    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:20.908020    3528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:57:21.078402    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:21.078402    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:21.083945    3528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Unit]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Description=Docker Application Container Engine
	I1210 05:57:21.102632    3528 command_runner.go:130] > Documentation=https://docs.docker.com
	I1210 05:57:21.102632    3528 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1210 05:57:21.102632    3528 command_runner.go:130] > Wants=network-online.target containerd.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > Requires=docker.socket
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitBurst=3
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitIntervalSec=60
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Service]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Type=notify
	I1210 05:57:21.102632    3528 command_runner.go:130] > Restart=always
	I1210 05:57:21.102632    3528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1210 05:57:21.102632    3528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1210 05:57:21.102632    3528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1210 05:57:21.102632    3528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1210 05:57:21.102632    3528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1210 05:57:21.102632    3528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1210 05:57:21.102632    3528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1210 05:57:21.102632    3528 command_runner.go:130] > ExecStart=
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1210 05:57:21.103158    3528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1210 05:57:21.103158    3528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNOFILE=infinity
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNPROC=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > LimitCORE=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1210 05:57:21.103378    3528 command_runner.go:130] > TasksMax=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > TimeoutStartSec=0
	I1210 05:57:21.103378    3528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1210 05:57:21.103378    3528 command_runner.go:130] > Delegate=yes
	I1210 05:57:21.103378    3528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1210 05:57:21.103378    3528 command_runner.go:130] > KillMode=process
	I1210 05:57:21.103378    3528 command_runner.go:130] > OOMScoreAdjust=-500
	I1210 05:57:21.103378    3528 command_runner.go:130] > [Install]
	I1210 05:57:21.103378    3528 command_runner.go:130] > WantedBy=multi-user.target
	I1210 05:57:21.111084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.134007    3528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:57:21.193270    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.218062    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:57:21.240026    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:21.262345    3528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1210 05:57:21.267460    3528 ssh_runner.go:195] Run: which cri-dockerd
	I1210 05:57:21.274915    3528 command_runner.go:130] > /usr/bin/cri-dockerd
	I1210 05:57:21.278860    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 05:57:21.290698    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 05:57:21.314565    3528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 05:57:21.466409    3528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 05:57:21.603844    3528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 05:57:21.603844    3528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 05:57:21.630009    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:57:21.650723    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:21.786633    3528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:57:22.595739    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:57:22.618130    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 05:57:22.639399    3528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 05:57:22.666084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:22.689760    3528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 05:57:22.826287    3528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 05:57:22.966482    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.147658    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 05:57:23.173945    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 05:57:23.199471    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.338742    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 05:57:23.455945    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
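cri-dockerd is socket-activated, which is why the sequence above stops and restarts the .socket unit before touching the .service. The same dance, collapsed into one hedged helper (unit names exactly as in the log, ordering as minikube ran it):

	restart_cri_dockerd() {
	  sudo systemctl stop cri-docker.socket        # quiesce the listener first
	  sudo systemctl unmask cri-docker.socket
	  sudo systemctl enable cri-docker.socket
	  sudo systemctl daemon-reload
	  sudo systemctl restart cri-docker.socket
	  sudo systemctl reset-failed cri-docker.service
	  sudo systemctl daemon-reload
	  sudo systemctl restart cri-docker.service
	  sudo systemctl is-active --quiet cri-docker.service
	}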
	I1210 05:57:23.474438    3528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 05:57:23.478444    3528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:57:23.486000    3528 command_runner.go:130] > Device: 0,112	Inode: 1768        Links: 1
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Modify: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Change: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] >  Birth: -
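"Will wait 60s for socket path" is a simple poll on the unix socket, which here succeeded on the first stat. An equivalent sketch of that wait loop (timeout value taken from the log):

	for _ in $(seq 1 60); do
	  [ -S /var/run/cri-dockerd.sock ] && break
	  sleep 1
	done
	stat /var/run/cri-dockerd.sock   # fails if the socket never appeared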
	I1210 05:57:23.486000    3528 start.go:564] Will wait 60s for crictl version
	I1210 05:57:23.490664    3528 ssh_runner.go:195] Run: which crictl
	I1210 05:57:23.496443    3528 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:57:23.501067    3528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:57:23.549049    3528 command_runner.go:130] > Version:  0.1.0
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeName:  docker
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:57:23.549049    3528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 05:57:23.552780    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.592051    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.595007    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.630739    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.635076    3528 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 05:57:23.638761    3528 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 05:57:23.765960    3528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 05:57:23.770487    3528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 05:57:23.780262    3528 command_runner.go:130] > 192.168.65.254	host.minikube.internal
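Host-IP discovery works by asking the container's embedded DNS for host.docker.internal and then ensuring /etc/hosts maps host.minikube.internal to that address. Reproduced by hand (container name from the log):

	HOST_IP=$(docker exec -t functional-871500 dig +short host.docker.internal)
	docker exec -t functional-871500 grep "host.minikube.internal" /etc/hosts \
	  || echo "would append: $HOST_IP host.minikube.internal"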
	I1210 05:57:23.784121    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:23.838579    3528 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:57:23.838579    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:23.841570    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.871575    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.871575    3528 docker.go:621] Images already preloaded, skipping extraction
	I1210 05:57:23.875579    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.907148    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.907148    3528 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:57:23.907148    3528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 05:57:23.907668    3528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:57:23.911609    3528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 05:57:23.978720    3528 command_runner.go:130] > cgroupfs
	I1210 05:57:23.983482    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:23.983482    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:23.983482    3528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:57:23.983482    3528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:57:23.983482    3528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 05:57:23.987498    3528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubeadm
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubectl
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubelet
	I1210 05:57:24.000182    3528 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:57:24.004093    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:57:24.018408    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 05:57:24.041215    3528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:57:24.061272    3528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
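Once kubeadm.yaml.new lands on the node, the generated config can be sanity-checked before any init or upgrade runs. A hedged sketch: `kubeadm config validate` exists on reasonably recent kubeadm releases (v1.26+), and the binary and file paths here are taken from the log:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new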
	I1210 05:57:24.082615    3528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:57:24.095804    3528 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:57:24.101162    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:24.247994    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:24.548481    3528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 05:57:24.548481    3528 certs.go:195] generating shared ca certs ...
	I1210 05:57:24.549012    3528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:24.549698    3528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 05:57:24.549774    3528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 05:57:24.549774    3528 certs.go:257] generating profile certs ...
	I1210 05:57:24.550590    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:57:24.551460    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:57:24.551604    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:57:24.551764    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:57:24.551869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:57:24.552075    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:57:24.552075    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 05:57:24.552075    3528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 05:57:24.552617    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 05:57:24.553394    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 05:57:24.553588    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.553766    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem -> /usr/share/ca-certificates/11304.pem
	I1210 05:57:24.553869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /usr/share/ca-certificates/113042.pem
	I1210 05:57:24.554786    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:57:24.581958    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:57:24.609312    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:57:24.634601    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:57:24.661713    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:57:24.690256    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:57:24.717784    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:57:24.748075    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:57:24.779590    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:57:24.808619    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 05:57:24.838348    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 05:57:24.862790    3528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:57:24.888297    3528 ssh_runner.go:195] Run: openssl version
	I1210 05:57:24.898078    3528 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:57:24.902400    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.918304    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:57:24.936062    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946045    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946080    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.950017    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.993898    3528 command_runner.go:130] > b5213941
	I1210 05:57:24.999156    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:57:25.016159    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.034260    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 05:57:25.053147    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.065786    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.108176    3528 command_runner.go:130] > 51391683
	I1210 05:57:25.113321    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:57:25.129918    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.147630    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 05:57:25.167521    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.180991    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.223232    3528 command_runner.go:130] > 3ec20f2e
	I1210 05:57:25.227937    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
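The three openssl x509 -hash -noout calls above print each CA certificate's subject hash (b5213941, 51391683, 3ec20f2e), and minikube then links /etc/ssl/certs/&lt;hash&gt;.0 to the installed PEM so OpenSSL-style CApath lookups resolve. A minimal Go sketch of that step, assuming an openssl binary on PATH (hypothetical helper, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA mirrors the log above: compute the cert's subject hash with
    // openssl, then point /etc/ssl/certs/<hash>.0 at the PEM (ln -fs style).
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace any stale link, as -f would
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }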
	I1210 05:57:25.244300    3528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:57:25.251407    3528 command_runner.go:130] > Device: 8,48	Inode: 15342       Links: 1
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: 2025-12-10 05:53:12.664767007 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Modify: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Change: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] >  Birth: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.255353    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:57:25.300587    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.306046    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:57:25.348642    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.354977    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:57:25.399294    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.403503    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:57:25.448300    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.453152    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:57:25.506357    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.511028    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:57:25.553903    3528 command_runner.go:130] > Certificate will not expire
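Each -checkend 86400 invocation above asks OpenSSL whether the certificate will still be valid 24 hours from now. The equivalent check in Go using only the standard library (a sketch, assuming each file holds a single PEM block):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, i.e. the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }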
	I1210 05:57:25.554908    3528 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:25.558842    3528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 05:57:25.593738    3528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:57:25.607577    3528 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:57:25.607628    3528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:57:25.607628    3528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:57:25.611091    3528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:57:25.623212    3528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:57:25.626623    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.680358    3528 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.681186    3528 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-871500" cluster setting kubeconfig missing "functional-871500" context setting]
	I1210 05:57:25.681273    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
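The lock.go line above shows minikube taking a named lock (500ms retry delay, 1m timeout) before rewriting the shared kubeconfig, so concurrent minikube processes cannot interleave writes. A minimal sketch of that pattern using an O_EXCL lock file (assumed semantics only; minikube's real implementation uses a dedicated lock library):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // writeFileLocked emulates the log's WriteFile-under-lock: create a
    // sibling .lock file exclusively, retrying every delay until timeout,
    // write the payload, then release the lock.
    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                break // lock acquired
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %s", lock)
            }
            time.Sleep(delay)
        }
        defer os.Remove(lock)
        return os.WriteFile(path, data, 0o600)
    }

    func main() {
        err := writeFileLocked("kubeconfig", []byte("apiVersion: v1\n"), 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }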
	I1210 05:57:25.700123    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.700864    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.702157    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.702219    3528 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:57:25.702289    3528 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:57:25.706500    3528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:57:25.721533    3528 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 05:57:25.721533    3528 kubeadm.go:602] duration metric: took 113.9037ms to restartPrimaryControlPlane
	I1210 05:57:25.721533    3528 kubeadm.go:403] duration metric: took 166.6224ms to StartCluster
	I1210 05:57:25.721533    3528 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.721533    3528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.722880    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.723468    3528 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 05:57:25.723468    3528 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:57:25.723468    3528 addons.go:70] Setting storage-provisioner=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:70] Setting default-storageclass=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:239] Setting addon storage-provisioner=true in "functional-871500"
	I1210 05:57:25.723990    3528 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-871500"
	I1210 05:57:25.723990    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:25.724039    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.727290    3528 out.go:179] * Verifying Kubernetes components...
	I1210 05:57:25.732528    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733215    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733847    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:25.784477    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.784477    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.785479    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.785479    3528 addons.go:239] Setting addon default-storageclass=true in "functional-871500"
	I1210 05:57:25.785479    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.792481    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.809483    3528 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:25.812486    3528 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:25.812486    3528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:57:25.815477    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.843475    3528 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:25.843475    3528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:57:25.846475    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.863476    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.889481    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:25.893492    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.997793    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.023732    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.053186    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:26.112921    3528 node_ready.go:35] waiting up to 6m0s for node "functional-871500" to be "Ready" ...
	I1210 05:57:26.112921    3528 type.go:168] "Request Body" body=""
	I1210 05:57:26.113457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:26.116638    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:26.133091    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.136407    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.136407    3528 retry.go:31] will retry after 345.217772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.150366    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.202827    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.202827    3528 retry.go:31] will retry after 151.034764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
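The retry.go lines before and after this point show the addon apply being retried with jittered, roughly doubling delays while the apiserver on localhost:8441 is still refusing connections. A generic sketch of that backoff loop (hypothetical helper; minikube's retry package differs in detail):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs fn until it succeeds or attempts are exhausted,
    // sleeping a jittered, doubling delay between failures, as in the
    // "will retry after ..." lines in this log.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
        })
    }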
	I1210 05:57:26.359087    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.431671    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.436291    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.436291    3528 retry.go:31] will retry after 206.058838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.486383    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.557721    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.560620    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.560620    3528 retry.go:31] will retry after 499.995799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.648783    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.718122    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.721048    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.721048    3528 retry.go:31] will retry after 393.754282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.063815    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.116921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:27.116921    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:27.119587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:27.119858    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:27.142617    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.145831    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.145969    3528 retry.go:31] will retry after 468.483229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.204933    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.208432    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.208432    3528 retry.go:31] will retry after 855.193396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.619421    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.706849    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.710739    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.710739    3528 retry.go:31] will retry after 912.738336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.069754    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:28.120644    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:28.120644    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:28.123531    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:28.143254    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.148927    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.148927    3528 retry.go:31] will retry after 983.332816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.628567    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:28.701176    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.706795    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.706795    3528 retry.go:31] will retry after 1.385287928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.123599    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:29.123599    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:29.126305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
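Each with_retry.go line above records client-go honoring a Retry-After response from the apiserver before re-issuing the node GET. The same behavior in plain net/http looks roughly like this (a sketch only; client-go's internals are more involved):

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // getHonoringRetryAfter re-issues a GET whenever the server answers
    // with a Retry-After header, sleeping the advertised number of seconds,
    // up to maxAttempts, mirroring the with_retry.go lines in this log.
    func getHonoringRetryAfter(url string, maxAttempts int) (*http.Response, error) {
        for attempt := 1; ; attempt++ {
            resp, err := http.Get(url)
            if err != nil {
                return nil, err
            }
            ra := resp.Header.Get("Retry-After")
            if ra == "" || attempt >= maxAttempts {
                return resp, nil
            }
            resp.Body.Close()
            secs, err := strconv.Atoi(ra)
            if err != nil {
                secs = 1 // fall back to the 1s delay seen in the log
            }
            fmt.Printf("Got a Retry-After response: delay=%ds attempt=%d\n", secs, attempt)
            time.Sleep(time.Duration(secs) * time.Second)
        }
    }

    func main() {
        resp, err := getHonoringRetryAfter("https://127.0.0.1:50086/api/v1/nodes/functional-871500", 10)
        if err != nil {
            fmt.Println(err)
            return
        }
        resp.Body.Close()
    }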
	I1210 05:57:29.136958    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:29.206724    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:29.211387    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.211387    3528 retry.go:31] will retry after 1.736840395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.096718    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:30.126845    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:30.126845    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:30.129697    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:30.181502    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:30.186062    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.186111    3528 retry.go:31] will retry after 1.361370091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.954728    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:31.028355    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.034556    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.034556    3528 retry.go:31] will retry after 1.491617713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.130593    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:31.130593    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:31.133462    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:31.553535    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:31.628770    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.634748    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.634748    3528 retry.go:31] will retry after 3.561022392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.134739    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:32.134739    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:32.138071    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:32.531847    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:32.611685    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:32.617246    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.617246    3528 retry.go:31] will retry after 5.95380248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:33.138488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:33.138875    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:33.141787    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:34.142311    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:34.142734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:34.145176    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.146145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:35.146145    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:35.148924    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.201546    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:35.276874    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:35.281183    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:35.281183    3528 retry.go:31] will retry after 3.730531418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:36.149846    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:36.149846    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.152788    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:57:36.152788    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:36.152788    3528 type.go:168] "Request Body" body=""
	I1210 05:57:36.152788    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.155425    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:37.155901    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:37.155901    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:37.159513    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.161109    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:38.161109    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:38.164724    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.577263    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:38.649489    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:38.652783    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:38.652883    3528 retry.go:31] will retry after 3.457172569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.016926    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:39.102009    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:39.106825    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.106825    3528 retry.go:31] will retry after 7.958311304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
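The "will retry after ..." delays for the two manifests grow roughly geometrically (3.7s, 8.0s, 8.7s, 14.5s, and later 31.9s for storage-provisioner.yaml) but not exactly, which is the signature of exponential backoff with random jitter. A minimal sketch of that pattern, not minikube's actual retry.go implementation; the base delay, growth factor, and attempt count are illustrative:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn with exponentially growing, jittered
    // delays, the shape of the "will retry after ..." lines above.
    func retryWithBackoff(fn func() error, attempts int) error {
        delay := 2 * time.Second
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Jitter: stretch each delay by a random factor in [1.0, 1.5).
            sleep := time.Duration(float64(delay) * (1.0 + 0.5*rand.Float64()))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(func() error {
            calls++
            if calls < 3 {
                return errors.New("connect: connection refused")
            }
            return nil
        }, 5)
        fmt.Println("final result:", err)
    }

Jitter keeps concurrent retriers (here, the storageclass and storage-provisioner applies) from synchronizing their attempts against the recovering apiserver.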
	I1210 05:57:39.165052    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:39.165052    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:39.167612    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:40.168385    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:40.168385    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:40.171568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:41.172124    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:41.172124    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:41.175998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:42.114835    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:42.176733    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:42.176733    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:42.179377    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:42.194232    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:42.198994    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:42.198994    3528 retry.go:31] will retry after 11.400414998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:43.179774    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:43.179774    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:43.182962    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:44.183364    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:44.183364    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:44.186385    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:45.186936    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:45.187376    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:45.189591    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:46.190096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:46.190096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.196158    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	W1210 05:57:46.196158    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:46.196158    3528 type.go:168] "Request Body" body=""
	I1210 05:57:46.196158    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.198622    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
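Two retry loops are interleaved here. The attempt=1..10 entries are the HTTP client's own retry of a single GET (each "Response" with an empty status means the connection died before any HTTP response arrived, hence the EOF); only once attempt 10 is exhausted does the outer node-readiness check log its warning and wait for the next tick. A sketch of that outer loop using apimachinery's polling helper; the probe body, interval, and timeout are illustrative, not minikube's values:

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // Outer readiness loop: each probe call may itself be retried at
    // the HTTP layer (the attempt=1..10 runs in the log); the poller
    // only sees the final outcome of each probe.
    func main() {
        probe := func(ctx context.Context) (bool, error) {
            // Stand-in for GET /api/v1/nodes/<name>. Returning
            // (false, nil) means "not ready yet, poll again", which is
            // what the node_ready check does after each EOF.
            return false, nil
        }
        err := wait.PollUntilContextTimeout(context.Background(),
            10*time.Second, 30*time.Second, true, probe)
        if err != nil {
            fmt.Println("node never became Ready:", err)
        }
    }

This layering is why the node_ready warnings arrive exactly ten seconds apart even though individual request attempts are spaced one second apart.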
	I1210 05:57:47.071512    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:47.150023    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:47.153571    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.153571    3528 retry.go:31] will retry after 8.685329621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.199356    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:47.199356    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:47.202855    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:48.203136    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:48.203136    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:48.209086    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:57:49.209940    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:49.209940    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:49.213512    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:50.214412    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:50.214412    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:50.218493    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:57:51.219009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:51.219009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:51.221689    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:52.221931    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:52.221931    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:52.224876    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:53.225848    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:53.225848    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:53.229481    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:53.604916    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:53.684553    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:53.688941    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:53.688941    3528 retry.go:31] will retry after 15.037235136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:54.230291    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:54.230291    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:54.233031    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:55.233749    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:55.233749    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:55.236864    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:55.845563    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:55.917684    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:55.920989    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:55.920989    3528 retry.go:31] will retry after 14.528574699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:56.237162    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:56.237162    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.240358    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:57:56.240358    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:56.240358    3528 type.go:168] "Request Body" body=""
	I1210 05:57:56.240358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.242693    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:57.243108    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:57.243108    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:57.246459    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:58.247768    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:58.248150    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:58.251587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:59.252608    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:59.252608    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:59.255751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:00.256340    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:00.256340    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:00.259424    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:01.260417    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:01.260417    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:01.263835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:02.264658    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:02.264976    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:02.268894    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:03.269646    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:03.270040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:03.272742    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:04.273295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:04.273295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:04.276636    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:05.277239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:05.277639    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:05.280629    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:06.281483    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:06.281483    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.285745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1210 05:58:06.285802    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:06.285840    3528 type.go:168] "Request Body" body=""
	I1210 05:58:06.285987    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.288564    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:07.289127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:07.289127    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:07.292563    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:08.293072    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:08.293072    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:08.297241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:08.732392    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:08.811298    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:08.814895    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:08.814895    3528 retry.go:31] will retry after 24.059893548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:09.297667    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:09.297667    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:09.300824    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:10.301402    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:10.301402    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:10.304411    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:10.455124    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:10.546239    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:10.546239    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:10.546239    3528 retry.go:31] will retry after 31.876597574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:11.304978    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:11.304978    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:11.308149    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:12.308734    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:12.308734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:12.311812    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:13.312561    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:13.313241    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:13.316204    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:14.317485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:14.317883    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:14.320038    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:15.320460    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:15.320460    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:15.323420    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:16.323723    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:16.323723    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.326977    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:16.326977    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:16.327139    3528 type.go:168] "Request Body" body=""
	I1210 05:58:16.327227    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.329681    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:17.330932    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:17.330932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:17.333882    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:18.334334    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:18.334798    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:18.338144    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:19.338534    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:19.338534    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:19.342989    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:20.343612    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:20.343612    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:20.346805    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:21.347681    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:21.347681    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:21.350863    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:22.351290    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:22.351290    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:22.354536    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:23.355239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:23.355239    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:23.358499    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:24.359467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:24.359467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:24.364653    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:58:25.365025    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:25.365025    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:25.368433    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:26.369056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:26.369056    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.372426    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:26.372457    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:26.372457    3528 type.go:168] "Request Body" body=""
	I1210 05:58:26.372457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.374640    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:27.375624    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:27.375624    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:27.379448    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:28.380744    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:28.380744    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:28.384412    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:29.385100    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:29.385455    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:29.388161    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:30.388490    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:30.388490    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:30.391842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:31.392294    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:31.392294    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:31.395842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:32.397016    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:32.397016    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:32.399019    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:32.881902    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:32.967281    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:32.972519    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:32.972519    3528 retry.go:31] will retry after 41.610684516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:33.399525    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:33.399525    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:33.402804    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:34.403496    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:34.403496    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:34.406699    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:35.406992    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:35.406992    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:35.410007    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:36.410696    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:36.410696    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.414578    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:36.414673    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:36.414815    3528 type.go:168] "Request Body" body=""
	I1210 05:58:36.414864    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.417495    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:37.417917    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:37.418702    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:37.421367    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:38.421905    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:38.421905    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:38.424630    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:39.425767    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:39.426355    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:39.429576    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:40.429801    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:40.429801    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:40.433301    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:41.433959    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:41.433959    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:41.437621    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:42.429097    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:42.438217    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:42.438429    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:42.440917    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:42.509794    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.514955    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.515232    3528 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
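At this point the backoff budget is spent: minikube stops retrying the storage-provisioner apply and surfaces the failure once through out.go as "Enabling 'storage-provisioner' returned an error: running callbacks: [...]", rather than aborting the whole start. A sketch of that collect-and-report shape, with hypothetical names (enableAddon is not minikube's API):

    package main

    import (
        "errors"
        "fmt"
    )

    // enableAddon (hypothetical) runs each enable step as a callback,
    // collects the failures, and reports the batch once instead of
    // aborting start-up, matching the "running callbacks: [...]" text.
    func enableAddon(name string, callbacks []func() error) {
        var errs []error
        for _, cb := range callbacks {
            if err := cb(); err != nil {
                errs = append(errs, err)
            }
        }
        if len(errs) > 0 {
            fmt.Printf("! Enabling '%s' returned an error: running callbacks: %v\n",
                name, errors.Join(errs...))
        }
    }

    func main() {
        enableAddon("storage-provisioner", []func() error{
            func() error {
                return errors.New("kubectl apply: Process exited with status 1")
            },
        })
    }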
	I1210 05:58:43.441740    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:43.441740    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:43.444947    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:44.445672    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:44.445672    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:44.449361    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:45.449616    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:45.450071    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:45.452940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:46.454145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:46.454503    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:46.458078    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:46.458078    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
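This warning closes one poll cycle: each GET goes through client-go's round trippers, the client honors a Retry-After and re-issues the request up to ten times at one-second intervals, and once that budget is exhausted the node_ready check logs the EOF and immediately starts a fresh poll, which is why the attempt counter resets to 1 every ten seconds below. A sketch of the readiness check being performed, assuming the kubeconfig path from the log and standard client-go calls (the Retry-After handling itself lives inside the client and is not reproduced here):

// Sketch only: poll the node's Ready condition until it is True or time runs out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// On EOF or connection errors the check just waits and re-polls,
		// matching the 1 s cadence in the log above.
		time.Sleep(time.Second)
	}
	return fmt.Errorf("node %q never reported Ready", name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-871500", 5*time.Minute))
}

Note the EOF is not a timeout: responses come back in 2-4 ms with an empty status, which suggests the forwarded port accepts the TCP connection but the apiserver side closes it before sending a status line.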
	[... initial poll plus with_retry attempts 1-10 against https://127.0.0.1:50086/api/v1/nodes/functional-871500, one per second from 05:58:46.458 to 05:58:56.498, each returning an empty response in 2-4 ms ...]
	W1210 05:58:56.498627    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 05:58:56.498 to 05:59:06.538, each returning an empty response in 2-4 ms ...]
	W1210 05:59:06.538826    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-8 against the same URL, one per second from 05:59:06.538 to 05:59:14.574, each returning an empty response in 2-4 ms ...]
	I1210 05:59:14.588699    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:59:14.659982    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:59:14.667272    3528 out.go:179] * Enabled addons: 
	I1210 05:59:14.669291    3528 addons.go:530] duration metric: took 1m48.9444759s for enable addons: enabled=[]
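With both callbacks still failing, minikube gives up on the addon step after about 1m49s and reports an empty enabled set (enabled=[]); the node readiness poll continues independently below. Before reaching for kubectl flags, a direct TCP probe of the two endpoints in this log separates "nothing listening" (port 8441) from "listening but closing the connection" (port 50086). A small sketch, with both addresses taken from the log:

// Sketch only: probe the two apiserver endpoints the log shows failing.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err) // e.g. "connection refused"
		return
	}
	conn.Close()
	fmt.Printf("%s: TCP accepted (the apiserver may still close it, as the EOFs show)\n", addr)
}

func main() {
	probe("localhost:8441")  // kubeconfig endpoint inside the VM
	probe("127.0.0.1:50086") // forwarded endpoint on the host
}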
	[... with_retry attempts 9-10 against the same URL, one per second from 05:59:15.575 to 05:59:16.581, each returning an empty response in 2-3 ms ...]
	W1210 05:59:16.581626    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 05:59:16.581 to 05:59:26.624, each returning an empty response in 2-4 ms ...]
	W1210 05:59:26.624539    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 05:59:26.624 to 05:59:36.667, each returning an empty response in 2-3 ms ...]
	W1210 05:59:36.667368    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 05:59:36.667 to 05:59:46.708, each returning an empty response in 2-4 ms ...]
	W1210 05:59:46.709023    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 05:59:46.709 to 05:59:56.753, each returning an empty response in 2-4 ms ...]
	W1210 05:59:56.753273    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 05:59:56.753 to 06:00:06.794, each returning an empty response in 2-4 ms ...]
	W1210 06:00:06.794128    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... initial poll plus with_retry attempts 1-10 against the same URL, one per second from 06:00:06.794 to 06:00:16.833, each returning an empty response in 1-4 ms ...]
	W1210 06:00:16.833829    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: the identical 1s Retry-After poll loop repeats unchanged from 06:00:16.834 through 06:01:47.206; each cycle issues 10 GET requests to https://127.0.0.1:50086/api/v1/nodes/functional-871500 (round_trippers.go:527/632, with_retry.go:234) and ends with the same warning: error getting node "functional-871500" condition "Ready" status (will retry): EOF]
	W1210 06:01:47.206151    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:47.206151    3528 type.go:168] "Request Body" body=""
	[... the 1-second retry cycle shown above repeats unchanged through 06:03:20: each cycle issues ten GETs to https://127.0.0.1:50086/api/v1/nodes/functional-871500 (with_retry.go:234 "Got a Retry-After response" delay="1s" attempts 1-10; round_trippers.go:632 reports every response with empty status and headers in 1-7 ms), then ends in a node_ready.go warning. The warnings from the elided cycles: ...]
	W1210 06:01:57.245969    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:02:07.286782    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:02:17.326985    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:02:27.365813    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:02:37.408838    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:02:47.453841    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:02:57.497800    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:03:07.544874    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	W1210 06:03:17.584442    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... excerpt cut off mid-cycle at 06:03:20 ...]
	 >
	I1210 06:03:20.597940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:21.598595    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:21.598595    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:21.601244    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:22.601730    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:22.601730    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:22.604442    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:23.605664    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:23.605664    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:23.608404    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:24.609206    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:24.609206    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:24.612484    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:25.613066    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:25.613066    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:25.615998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:03:26.117891    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 06:03:26.117891    3528 node_ready.go:38] duration metric: took 6m0.0004685s for node "functional-871500" to be "Ready" ...
	I1210 06:03:26.123026    3528 out.go:203] 
	W1210 06:03:26.125419    3528 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:03:26.125419    3528 out.go:285] * 
	W1210 06:03:26.127475    3528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:03:26.130878    3528 out.go:203] 
	
	
	==> Docker <==
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483189206Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483194507Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483214008Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483249911Z" level=info msg="Initializing buildkit"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.582637464Z" level=info msg="Completed buildkit initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589253381Z" level=info msg="Daemon has completed initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589392791Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589467497Z" level=info msg="API listen on [::]:2376"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589490799Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 05:57:22 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:22 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 05:57:23 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Loaded network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 05:57:23 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:03:29.243144   17692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:03:29.244383   17692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:03:29.245935   17692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:03:29.246960   17692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:03:29.248210   17692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001083] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001015] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000877] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 05:57] CPU: 2 PID: 55724 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000754] RIP: 0033:0x7fd067afcb20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fd067afcaf6.
	[  +0.000673] RSP: 002b:00007ffe57c686d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000893] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000747] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000734] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000747] FS:  0000000000000000 GS:  0000000000000000
	[  +0.824990] CPU: 8 PID: 55850 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000805] RIP: 0033:0x7f91646e5b20
	[  +0.000401] Code: Unable to access opcode bytes at RIP 0x7f91646e5af6.
	[  +0.000653] RSP: 002b:00007ffe3817fb80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000798] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:03:29 up  1:31,  0 user,  load average: 0.39, 0.34, 0.63
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:03:25 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:03:26 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 816.
	Dec 10 06:03:26 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:26 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:26 functional-871500 kubelet[17523]: E1210 06:03:26.621744   17523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:03:26 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:03:26 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:03:27 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 817.
	Dec 10 06:03:27 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:27 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:27 functional-871500 kubelet[17535]: E1210 06:03:27.355390   17535 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:03:27 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:03:27 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:03:28 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 818.
	Dec 10 06:03:28 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:28 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:28 functional-871500 kubelet[17562]: E1210 06:03:28.105490   17562 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:03:28 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:03:28 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:03:28 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 819.
	Dec 10 06:03:28 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:28 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:03:28 functional-871500 kubelet[17657]: E1210 06:03:28.843516   17657 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:03:28 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:03:28 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (597.6711ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (373.92s)
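The kubelet journal above pinpoints the failure: kubelet v1.35.0-rc.1 exits during startup validation because the WSL2 host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes up and the node-ready wait burns its full 6m0s budget. As a host-side sanity check (illustrative diagnostic only, not something the suite runs), the cgroup version inside the kic container can be read with the standard filesystem-type probe:

	# "cgroup2fs" means cgroup v2; "tmpfs" means legacy cgroup v1 (the failing case here).
	docker exec functional-871500 stat -fc %T /sys/fs/cgroup/
	# Docker Desktop reports the same at the daemon level:
	docker info --format "{{.CgroupVersion}}"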
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (53.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-871500 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-871500 get po -A: exit status 1 (50.3738854s)
** stderr ** 
	E1210 06:03:41.039750    8632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:03:51.079737    8632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:04:01.121299    8632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:04:11.161567    8632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:04:21.203637    8632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-871500 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1210 06:03:41.039750    8632 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:50086/api?timeout=32s\\\": EOF\"\nE1210 06:03:51.079737    8632 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:50086/api?timeout=32s\\\": EOF\"\nE1210 06:04:01.121299    8632 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:50086/api?timeout=32s\\\": EOF\"\nE1210 06:04:11.161567    8632 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:50086/api?timeout=32s\\\": EOF\"\nE1210 06:04:21.203637    8632 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:50086/api?timeout=32s\\\": EOF\"\nUnable to connect to the server: EOF\n"*: args "kubectl --context functional-871500 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-871500 get po -A"
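Every kubectl attempt here dials 127.0.0.1:50086, the host port Docker maps to the apiserver port 8441 inside the container (see the inspect output below), and gets EOF because nothing is answering behind that tunnel. A minimal reachability probe from the Windows host, assuming curl is available (hypothetical diagnostic, not part of the harness):

	# -k skips verification of minikube's self-signed certificate; a healthy
	# apiserver answers /version with a JSON build-info blob, while the state
	# captured above would yield the same EOF/connection reset kubectl saw.
	curl -k https://127.0.0.1:50086/version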
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:
-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
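The NetworkSettings.Ports map in the inspect output above is where the 127.0.0.1:50086 endpoint in the client logs comes from: container port 8441 (the --apiserver-port chosen at profile creation) is published on an ephemeral localhost port. The same binding can be read back without parsing the full JSON (illustrative command):

	# Prints the host binding for container port 8441, e.g. "127.0.0.1:50086".
	docker port functional-871500 8441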
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (595.1938ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.2020876s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ docker-env     │ functional-493600 docker-env                                                                                                                              │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image save kicbase/echo-server:functional-493600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image rm kicbase/echo-server:functional-493600 --alsologtostderr                                                                        │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service        │ functional-493600 service hello-node --url --format={{.IP}}                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ docker-env     │ functional-493600 docker-env                                                                                                                              │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh            │ functional-493600 ssh sudo cat /etc/test/nested/copy/11304/hosts                                                                                          │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image save --daemon kicbase/echo-server:functional-493600 --alsologtostderr                                                             │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ update-context │ functional-493600 update-context --alsologtostderr -v=2                                                                                                   │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format short --alsologtostderr                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format yaml --alsologtostderr                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh            │ functional-493600 ssh pgrep buildkitd                                                                                                                     │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                                                    │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls                                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image          │ functional-493600 image ls --format json --alsologtostderr                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service        │ functional-493600 service hello-node --url                                                                                                                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image          │ functional-493600 image ls --format table --alsologtostderr                                                                                               │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete         │ -p functional-493600                                                                                                                                      │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start          │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	│ start          │ -p functional-871500 --alsologtostderr -v=8                                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:57 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:57:16
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:57:16.875847    3528 out.go:360] Setting OutFile to fd 1624 ...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.917657    3528 out.go:374] Setting ErrFile to fd 1612...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.932616    3528 out.go:368] Setting JSON to false
	I1210 05:57:16.934770    3528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5168,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:57:16.934770    3528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:57:16.939605    3528 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:57:16.942014    3528 notify.go:221] Checking for updates...
	I1210 05:57:16.946622    3528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:16.950394    3528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:57:16.952350    3528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:57:16.955212    3528 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:57:16.957439    3528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:57:16.962034    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:16.962229    3528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:57:17.077929    3528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:57:17.082453    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.310960    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.287646185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.314972    3528 out.go:179] * Using the docker driver based on existing profile
	I1210 05:57:17.316973    3528 start.go:309] selected driver: docker
	I1210 05:57:17.316973    3528 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.316973    3528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:57:17.322956    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.562979    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.536373793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
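The info.go:266 line above dumps the decoded result of `docker info` as one Go struct. A minimal sketch of pulling a few of the same fields out of `docker info --format '{{json .}}'` — not minikube's actual decoder, which fills a far larger struct:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Subset of the fields visible in the log line above; the real blob has many more.
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        OSType          string `json:"OSType"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
        CgroupDriver    string `json:"CgroupDriver"`
    }

    func main() {
        // `docker info --format {{json .}}` emits the same data as a single JSON object.
        out, err := exec.Command("docker", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("docker %s on %s/%s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.OperatingSystem, info.OSType,
            info.NCPU, info.MemTotal, info.CgroupDriver)
    }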
	I1210 05:57:17.650233    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:17.650233    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:17.650860    3528 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
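The brace-delimited block above is Go's `%+v` struct formatting: field names followed by values, with nested structs in nested braces. A tiny illustration with a hypothetical two-field config (minikube's real ClusterConfig has dozens of fields):

    package main

    import "fmt"

    // Hypothetical config for illustration only.
    type clusterConfig struct {
        Name   string
        Memory int
    }

    func main() {
        cc := clusterConfig{Name: "functional-871500", Memory: 4096}
        // %+v prints field names, which is exactly the notation in the log above.
        fmt.Printf("cluster config:\n%+v\n", cc) // {Name:functional-871500 Memory:4096}
    }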
	I1210 05:57:17.654219    3528 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 05:57:17.656244    3528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:57:17.659128    3528 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:57:17.661459    3528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:57:17.661459    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:17.661583    3528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:57:17.661583    3528 cache.go:65] Caching tarball of preloaded images
	I1210 05:57:17.661583    3528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 05:57:17.662115    3528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
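preload.go decides between download and reuse with a plain existence check on the cached tarball. A minimal sketch of that check — the path layout matches the log, but the helper and its signature are illustrative, not minikube's API:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath is illustrative; minikube derives the name from the k8s
    // version, container runtime, storage driver, and architecture.
    func preloadPath(minikubeHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(`C:\Users\jenkins.minikube4\minikube-integration\.minikube`, "v1.35.0-rc.1")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no local preload, would download:", p)
        }
    }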
	I1210 05:57:17.662465    3528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 05:57:17.734611    3528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:57:17.734611    3528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 05:57:17.734611    3528 cache.go:243] Successfully downloaded all kic artifacts
	I1210 05:57:17.734611    3528 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:57:17.735277    3528 start.go:364] duration metric: took 104.4µs to acquireMachinesLock for "functional-871500"
	I1210 05:57:17.735336    3528 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:57:17.735336    3528 fix.go:54] fixHost starting: 
	I1210 05:57:17.741445    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:17.794847    3528 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 05:57:17.794847    3528 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:57:17.798233    3528 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 05:57:17.798233    3528 machine.go:94] provisionDockerMachine start ...
	I1210 05:57:17.802052    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:17.859397    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:17.860025    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:17.860025    3528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:57:18.039007    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
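The recurring `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` call resolves which host port Docker published for the container's SSH port; in this run it yields 50082. A small sketch using the same Go template (the log wraps it in extra quotes that minikube then trims; this version skips that):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // sshHostPort asks Docker which host port is bound to the container's 22/tcp.
    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("functional-871500")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ssh docker@127.0.0.1 -p " + port) // e.g. 50082 in this run
    }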
	
	I1210 05:57:18.039007    3528 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 05:57:18.043768    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.100666    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.100666    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.100666    3528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 05:57:18.283797    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.287904    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.342863    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.343348    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.343409    3528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:57:18.533020    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:57:18.533020    3528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 05:57:18.533020    3528 ubuntu.go:190] setting up certificates
	I1210 05:57:18.533020    3528 provision.go:84] configureAuth start
	I1210 05:57:18.537250    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:18.595140    3528 provision.go:143] copyHostCerts
	I1210 05:57:18.595839    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1210 05:57:18.596031    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 05:57:18.596062    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 05:57:18.596239    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 05:57:18.596845    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1210 05:57:18.597366    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 05:57:18.597406    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 05:57:18.597495    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 05:57:18.598291    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 05:57:18.598291    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 05:57:18.599093    3528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
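configureAuth regenerates the machine's server certificate so its SANs cover every address listed in the log line above. A self-signed sketch with crypto/x509, for shape only — minikube actually signs with its ca.pem/ca-key.pem pair rather than self-signing:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-871500"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
            // SANs from the log: every IP and hostname the server cert must cover.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"functional-871500", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here for brevity: the template is its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }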
	I1210 05:57:18.702479    3528 provision.go:177] copyRemoteCerts
	I1210 05:57:18.706176    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:57:18.709177    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.761464    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:18.886181    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1210 05:57:18.886181    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:57:18.914027    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1210 05:57:18.914027    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:57:18.939266    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1210 05:57:18.939794    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:57:18.968597    3528 provision.go:87] duration metric: took 435.5446ms to configureAuth
	I1210 05:57:18.968633    3528 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:57:18.969064    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:18.972714    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.026843    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.027475    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.027475    3528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 05:57:19.213570    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 05:57:19.213570    3528 ubuntu.go:71] root file system type: overlay
	I1210 05:57:19.213570    3528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 05:57:19.217470    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.271762    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.271762    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.271762    3528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 05:57:19.465304    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 05:57:19.469988    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.524496    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.525153    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.525153    3528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 05:57:19.708281    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
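The `diff -u ... || { mv ...; systemctl ... }` one-liner above is a compare-and-swap: the unit is replaced and docker reloaded, enabled, and restarted only when the freshly rendered file differs from what is already installed. A local Go sketch of the same write-if-changed pattern (paths and unit body are illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the diff-or-move one-liner: replace the unit and
    // bounce the service only when the rendered content actually differs.
    func installIfChanged(path string, rendered []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: skip daemon-reload and the docker restart
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // elided
        if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
            log.Fatal(err)
        }
    }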
	I1210 05:57:19.708281    3528 machine.go:97] duration metric: took 1.9100246s to provisionDockerMachine
	I1210 05:57:19.708281    3528 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 05:57:19.708281    3528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:57:19.712864    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:57:19.716356    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.769263    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:19.910607    3528 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:57:19.918702    3528 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_ID="12"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:57:19.918702    3528 command_runner.go:130] > ID=debian
	I1210 05:57:19.918702    3528 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:57:19.918702    3528 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:57:19.918702    3528 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:57:19.918927    3528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:57:19.919018    3528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:57:19.919060    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 05:57:19.919569    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 05:57:19.919739    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 05:57:19.919739    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /etc/ssl/certs/113042.pem
	I1210 05:57:19.921060    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 05:57:19.921102    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> /etc/test/nested/copy/11304/hosts
	I1210 05:57:19.926330    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 05:57:19.937995    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 05:57:19.967462    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 05:57:19.996671    3528 start.go:296] duration metric: took 288.3864ms for postStartSetup
	I1210 05:57:20.001220    3528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:20.004094    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.057975    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.183984    3528 command_runner.go:130] > 1%
	I1210 05:57:20.188612    3528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:57:20.199532    3528 command_runner.go:130] > 950G
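The two probes above read the Use% and Avail columns of `df` for /var inside the guest. A Go sketch of the same field extraction, assuming GNU df output with the data row on the second line:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // dfField runs df against a path and returns one field from the data row,
    // equivalent to the `df ... | awk 'NR==2{print $N}'` pipelines above.
    func dfField(path, flag string, field int) (string, error) {
        out, err := exec.Command("df", flag, path).Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) < 2 {
            return "", fmt.Errorf("unexpected df output: %q", out)
        }
        fields := strings.Fields(lines[1])
        if len(fields) < field {
            return "", fmt.Errorf("df row too short: %q", lines[1])
        }
        return fields[field-1], nil
    }

    func main() {
        used, err := dfField("/var", "-h", 5) // Use% column, e.g. "1%"
        if err != nil {
            log.Fatal(err)
        }
        free, err := dfField("/var", "-BG", 4) // Avail column in GiB, e.g. "950G"
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("used:", used, "free:", free)
    }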
	I1210 05:57:20.200170    3528 fix.go:56] duration metric: took 2.4648044s for fixHost
	I1210 05:57:20.200170    3528 start.go:83] releasing machines lock for "functional-871500", held for 2.4648316s
	I1210 05:57:20.204329    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:20.260852    3528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 05:57:20.265678    3528 ssh_runner.go:195] Run: cat /version.json
	I1210 05:57:20.265678    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.268055    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.318377    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.318938    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.440815    3528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1210 05:57:20.440815    3528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
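The status-127 failure above is a host/guest mismatch: the connectivity probe is issued with the Windows binary name `curl.exe`, but it executes over SSH inside the Debian guest, where the binary is plain `curl`. A hypothetical sketch (not minikube code) of choosing the name by where the command will run rather than by the host OS:

    package main

    import "fmt"

    // curlBinary picks the binary name for the OS the probe will run on,
    // not the OS minikube itself runs on.
    func curlBinary(targetOS string) string {
        if targetOS == "windows" {
            return "curl.exe"
        }
        return "curl" // the guest here is Debian inside the kicbase container
    }

    func main() {
        // The probe runs over SSH inside the Linux guest, so host GOOS is irrelevant.
        fmt.Println(curlBinary("linux"), "-sS", "-m", "2", "https://registry.k8s.io/")
    }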
	I1210 05:57:20.448568    3528 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:57:20.452774    3528 ssh_runner.go:195] Run: systemctl --version
	I1210 05:57:20.464224    3528 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:57:20.464224    3528 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:57:20.469738    3528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:57:20.478403    3528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:57:20.478403    3528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:57:20.483606    3528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:57:20.495780    3528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:57:20.495780    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:20.495780    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:20.495780    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:20.518759    3528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:57:20.523282    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:57:20.541393    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 05:57:20.546364    3528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 05:57:20.546364    3528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 05:57:20.557861    3528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:57:20.562880    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:57:20.580735    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.598803    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:57:20.615367    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.637025    3528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:57:20.656757    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:57:20.676589    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:57:20.695912    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
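The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: sandbox image, cgroup driver, runtime class, CNI conf dir, and unprivileged ports. One of those substitutions, reproduced as a Go multiline regexp with a capture group so the original indentation survives:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        cfg := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
            "    SystemdCgroup = true\n"
        // Same edit as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
    }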
	I1210 05:57:20.717653    3528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:57:20.732788    3528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:57:20.737410    3528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:57:20.756411    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:20.908020    3528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:57:21.078402    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:21.078402    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:21.083945    3528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Unit]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Description=Docker Application Container Engine
	I1210 05:57:21.102632    3528 command_runner.go:130] > Documentation=https://docs.docker.com
	I1210 05:57:21.102632    3528 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1210 05:57:21.102632    3528 command_runner.go:130] > Wants=network-online.target containerd.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > Requires=docker.socket
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitBurst=3
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitIntervalSec=60
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Service]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Type=notify
	I1210 05:57:21.102632    3528 command_runner.go:130] > Restart=always
	I1210 05:57:21.102632    3528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1210 05:57:21.102632    3528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1210 05:57:21.102632    3528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1210 05:57:21.102632    3528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1210 05:57:21.102632    3528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1210 05:57:21.102632    3528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1210 05:57:21.102632    3528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1210 05:57:21.102632    3528 command_runner.go:130] > ExecStart=
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1210 05:57:21.103158    3528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1210 05:57:21.103158    3528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNOFILE=infinity
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNPROC=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > LimitCORE=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1210 05:57:21.103378    3528 command_runner.go:130] > TasksMax=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > TimeoutStartSec=0
	I1210 05:57:21.103378    3528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1210 05:57:21.103378    3528 command_runner.go:130] > Delegate=yes
	I1210 05:57:21.103378    3528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1210 05:57:21.103378    3528 command_runner.go:130] > KillMode=process
	I1210 05:57:21.103378    3528 command_runner.go:130] > OOMScoreAdjust=-500
	I1210 05:57:21.103378    3528 command_runner.go:130] > [Install]
	I1210 05:57:21.103378    3528 command_runner.go:130] > WantedBy=multi-user.target
	I1210 05:57:21.111084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.134007    3528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:57:21.193270    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.218062    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:57:21.240026    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:21.262345    3528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1210 05:57:21.267460    3528 ssh_runner.go:195] Run: which cri-dockerd
	I1210 05:57:21.274915    3528 command_runner.go:130] > /usr/bin/cri-dockerd
	I1210 05:57:21.278860    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 05:57:21.290698    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 05:57:21.314565    3528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 05:57:21.466409    3528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 05:57:21.603844    3528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 05:57:21.603844    3528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 05:57:21.630009    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:57:21.650723    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:21.786633    3528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:57:22.595739    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:57:22.618130    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 05:57:22.639399    3528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 05:57:22.666084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:22.689760    3528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 05:57:22.826287    3528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 05:57:22.966482    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.147658    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 05:57:23.173945    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 05:57:23.199471    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.338742    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 05:57:23.455945    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:23.474438    3528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 05:57:23.478444    3528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:57:23.486000    3528 command_runner.go:130] > Device: 0,112	Inode: 1768        Links: 1
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Modify: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Change: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] >  Birth: -
	I1210 05:57:23.486000    3528 start.go:564] Will wait 60s for crictl version
	I1210 05:57:23.490664    3528 ssh_runner.go:195] Run: which crictl
	I1210 05:57:23.496443    3528 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:57:23.501067    3528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:57:23.549049    3528 command_runner.go:130] > Version:  0.1.0
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeName:  docker
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:57:23.549049    3528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 05:57:23.552780    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.592051    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.595007    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.630739    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.635076    3528 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 05:57:23.638761    3528 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 05:57:23.765960    3528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 05:57:23.770487    3528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 05:57:23.780262    3528 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1210 05:57:23.784121    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:23.838579    3528 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:57:23.838579    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:23.841570    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.871575    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.871575    3528 docker.go:621] Images already preloaded, skipping extraction
	I1210 05:57:23.875579    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.907148    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.907148    3528 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:57:23.907148    3528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 05:57:23.907668    3528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:57:23.911609    3528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 05:57:23.978720    3528 command_runner.go:130] > cgroupfs
	I1210 05:57:23.983482    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:23.983482    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:23.983482    3528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:57:23.983482    3528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:57:23.983482    3528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
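The kubeadm.go:196 block above is the fully rendered config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. A minimal sketch of rendering such YAML from parameters with text/template; the fragment is illustrative, not minikube's real template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // Illustrative fragment; the real template covers the full
    // InitConfiguration/ClusterConfiguration shown above.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        params := struct {
            AdvertiseAddress, CRISocket, NodeName string
            APIServerPort                         int
        }{"192.168.49.2", "/var/run/cri-dockerd.sock", "functional-871500", 8441}
        t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        if err := t.Execute(os.Stdout, params); err != nil {
            log.Fatal(err)
        }
    }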
	
	I1210 05:57:23.987498    3528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubeadm
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubectl
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubelet
	I1210 05:57:24.000182    3528 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:57:24.004093    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:57:24.018408    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 05:57:24.041215    3528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:57:24.061272    3528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1210 05:57:24.082615    3528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:57:24.095804    3528 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:57:24.101162    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:24.247994    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:24.548481    3528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 05:57:24.548481    3528 certs.go:195] generating shared ca certs ...
	I1210 05:57:24.549012    3528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:24.549698    3528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 05:57:24.549774    3528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 05:57:24.549774    3528 certs.go:257] generating profile certs ...
	I1210 05:57:24.550590    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:57:24.551460    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:57:24.551604    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:57:24.551764    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:57:24.551869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:57:24.552075    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:57:24.552075    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 05:57:24.552075    3528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 05:57:24.552617    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 05:57:24.553394    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 05:57:24.553588    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.553766    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem -> /usr/share/ca-certificates/11304.pem
	I1210 05:57:24.553869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /usr/share/ca-certificates/113042.pem
	I1210 05:57:24.554786    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:57:24.581958    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:57:24.609312    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:57:24.634601    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:57:24.661713    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:57:24.690256    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:57:24.717784    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:57:24.748075    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:57:24.779590    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:57:24.808619    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 05:57:24.838348    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 05:57:24.862790    3528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
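	(The "scp memory --> /var/lib/minikube/kubeconfig" line above ships an in-memory asset to the node instead of copying a local file. A minimal sketch of that idea, assuming a plain `ssh` client on PATH; the host string and payload below are placeholders, not minikube's actual transport:)

	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	// pushBytes writes data to remotePath on host by piping it through
	// `ssh host "sudo tee remotePath"` - one way to deliver an in-memory
	// asset without staging a temporary local file first.
	func pushBytes(host, remotePath string, data []byte) error {
		cmd := exec.Command("ssh", host, "sudo tee "+remotePath+" >/dev/null")
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // placeholder payload
		if err := pushBytes("docker@127.0.0.1", "/var/lib/minikube/kubeconfig", kubeconfig); err != nil {
			log.Fatal(err)
		}
	}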
	I1210 05:57:24.888297    3528 ssh_runner.go:195] Run: openssl version
	I1210 05:57:24.898078    3528 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:57:24.902400    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.918304    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:57:24.936062    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946045    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946080    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.950017    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.993898    3528 command_runner.go:130] > b5213941
	I1210 05:57:24.999156    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:57:25.016159    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.034260    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 05:57:25.053147    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.065786    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.108176    3528 command_runner.go:130] > 51391683
	I1210 05:57:25.113321    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:57:25.129918    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.147630    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 05:57:25.167521    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.180991    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.223232    3528 command_runner.go:130] > 3ec20f2e
	I1210 05:57:25.227937    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
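	(The three hash-and-symlink sequences above install CAs the way OpenSSL expects: compute the short subject hash, then link the PEM as <hash>.0 in /etc/ssl/certs. A sketch of the hash step, using the same `openssl x509 -hash -noout` invocation seen in the log; the example path is the one from the log:)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// subjectHash runs `openssl x509 -hash -noout -in path`, which prints
	// the short subject hash (e.g. b5213941) that OpenSSL uses to look up
	// CA certificates as <hash>.0 symlinks in the trust directory.
	func subjectHash(path string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		hash, err := subjectHash(pem)
		if err != nil {
			log.Fatal(err)
		}
		// The trust-store entry the log then verifies with `sudo test -L`:
		fmt.Printf("/etc/ssl/certs/%s.0 -> %s\n", hash, pem)
	}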
	I1210 05:57:25.244300    3528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:57:25.251407    3528 command_runner.go:130] > Device: 8,48	Inode: 15342       Links: 1
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: 2025-12-10 05:53:12.664767007 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Modify: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Change: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] >  Birth: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.255353    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:57:25.300587    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.306046    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:57:25.348642    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.354977    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:57:25.399294    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.403503    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:57:25.448300    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.453152    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:57:25.506357    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.511028    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:57:25.553903    3528 command_runner.go:130] > Certificate will not expire
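	(Each check above runs `openssl x509 -checkend 86400`, which fails if the certificate expires within 24 hours. A Go equivalent of that test using crypto/x509 - a sketch, not minikube's implementation; the cert path is one from the log:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d - the same predicate `openssl x509 -checkend <seconds>`
	// evaluates.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire") // matches the log lines above
		}
	}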
	I1210 05:57:25.554908    3528 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:25.558842    3528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 05:57:25.593738    3528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:57:25.607577    3528 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:57:25.607628    3528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:57:25.607628    3528 kubeadm.go:598] restartPrimaryControlPlane start ...
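	(The restart decision above hinges on whether kubeadm's state files already exist on the node: if the `sudo ls` probe finds all three paths, minikube attempts a cluster restart rather than a fresh `kubeadm init`. A hedged sketch of that check - the path list mirrors the probe above, while the local os.Stat is a stand-in for the SSH runner:)

	package main

	import (
		"fmt"
		"os"
	)

	// hasExistingControlPlane is a stand-in for the `sudo ls ...` probe:
	// only if every kubeadm state path exists is a restart attempted.
	func hasExistingControlPlane() bool {
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		for _, p := range paths {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		if hasExistingControlPlane() {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("no existing configuration, would run kubeadm init")
		}
	}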
	I1210 05:57:25.611091    3528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:57:25.623212    3528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:57:25.626623    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.680358    3528 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.681186    3528 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-871500" cluster setting kubeconfig missing "functional-871500" context setting]
	I1210 05:57:25.681273    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
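	(The "needs updating (will repair)" step re-adds the missing cluster and context entries to the kubeconfig. A minimal sketch with client-go's clientcmd package - the profile name and server URL are copied from the log, the file path is a placeholder, and this is an illustration of the repair idea, not minikube's kubeconfig.go:)

	package main

	import (
		"log"

		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "kubeconfig" // stand-in for the file the log repairs
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			cfg = api.NewConfig() // start fresh if the file is missing or empty
		}

		// Re-add the cluster and context entries the verification found missing.
		cluster := api.NewCluster()
		cluster.Server = "https://127.0.0.1:50086"
		cfg.Clusters["functional-871500"] = cluster

		ctx := api.NewContext()
		ctx.Cluster = "functional-871500"
		ctx.AuthInfo = "functional-871500"
		cfg.Contexts["functional-871500"] = ctx

		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}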
	I1210 05:57:25.700123    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.700864    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.702157    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.702219    3528 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:57:25.702289    3528 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:57:25.706500    3528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:57:25.721533    3528 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 05:57:25.721533    3528 kubeadm.go:602] duration metric: took 113.9037ms to restartPrimaryControlPlane
	I1210 05:57:25.721533    3528 kubeadm.go:403] duration metric: took 166.6224ms to StartCluster
	I1210 05:57:25.721533    3528 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.721533    3528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.722880    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.723468    3528 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 05:57:25.723468    3528 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:57:25.723468    3528 addons.go:70] Setting storage-provisioner=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:70] Setting default-storageclass=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:239] Setting addon storage-provisioner=true in "functional-871500"
	I1210 05:57:25.723990    3528 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-871500"
	I1210 05:57:25.723990    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:25.724039    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.727290    3528 out.go:179] * Verifying Kubernetes components...
	I1210 05:57:25.732528    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733215    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733847    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:25.784477    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.784477    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.785479    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.785479    3528 addons.go:239] Setting addon default-storageclass=true in "functional-871500"
	I1210 05:57:25.785479    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.792481    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.809483    3528 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:25.812486    3528 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:25.812486    3528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:57:25.815477    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.843475    3528 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:25.843475    3528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:57:25.846475    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.863476    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.889481    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:25.893492    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.997793    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.023732    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.053186    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:26.112921    3528 node_ready.go:35] waiting up to 6m0s for node "functional-871500" to be "Ready" ...
	I1210 05:57:26.112921    3528 type.go:168] "Request Body" body=""
	I1210 05:57:26.113457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:26.116638    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:26.133091    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.136407    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.136407    3528 retry.go:31] will retry after 345.217772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.150366    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.202827    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.202827    3528 retry.go:31] will retry after 151.034764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
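	(The retry.go lines above re-run the failing `kubectl apply` with a growing, jittered delay while the apiserver is still coming up. A generic sketch of that pattern - the schedule below is illustrative, not minikube's actual backoff:)

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered,
	// roughly-doubling delay between failures, like the
	// "will retry after ..." lines above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base<<i + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(5, 150*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return fmt.Errorf("connect: connection refused") // simulated failure
			}
			return nil
		})
	}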
	I1210 05:57:26.359087    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.431671    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.436291    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.436291    3528 retry.go:31] will retry after 206.058838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.486383    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.557721    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.560620    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.560620    3528 retry.go:31] will retry after 499.995799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.648783    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.718122    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.721048    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.721048    3528 retry.go:31] will retry after 393.754282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.063815    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.116921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:27.116921    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:27.119587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
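	(Separately, with_retry.go above backs off when the apiserver answers with a Retry-After header while polling node readiness. A plain net/http sketch of honoring that header - the URL is the one from the log, the 1s fallback matches the delay shown, and InsecureSkipVerify stands in for the client certs this sketch lacks:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	// retryAfter returns the server-requested delay, or def if the
	// Retry-After header is absent or unparseable.
	func retryAfter(resp *http.Response, def time.Duration) time.Duration {
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				return time.Duration(secs) * time.Second
			}
		}
		return def
	}

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}
		for attempt := 1; attempt <= 3; attempt++ {
			resp, err := client.Get("https://127.0.0.1:50086/api/v1/nodes/functional-871500")
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			delay := retryAfter(resp, time.Second)
			resp.Body.Close()
			if resp.StatusCode < 500 && resp.StatusCode != http.StatusTooManyRequests {
				fmt.Println("done with status", resp.StatusCode)
				return
			}
			fmt.Printf("Got a Retry-After response, delay=%v attempt=%d\n", delay, attempt)
			time.Sleep(delay)
		}
	}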
	I1210 05:57:27.119858    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:27.142617    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.145831    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.145969    3528 retry.go:31] will retry after 468.483229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.204933    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.208432    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.208432    3528 retry.go:31] will retry after 855.193396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.619421    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.706849    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.710739    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.710739    3528 retry.go:31] will retry after 912.738336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.069754    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:28.120644    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:28.120644    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:28.123531    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:28.143254    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.148927    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.148927    3528 retry.go:31] will retry after 983.332816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.628567    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:28.701176    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.706795    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.706795    3528 retry.go:31] will retry after 1.385287928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.123599    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:29.123599    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:29.126305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:29.136958    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:29.206724    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:29.211387    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.211387    3528 retry.go:31] will retry after 1.736840395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.096718    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:30.126845    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:30.126845    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:30.129697    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:30.181502    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:30.186062    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.186111    3528 retry.go:31] will retry after 1.361370091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.954728    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:31.028355    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.034556    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.034556    3528 retry.go:31] will retry after 1.491617713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.130593    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:31.130593    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:31.133462    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:31.553535    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:31.628770    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.634748    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.634748    3528 retry.go:31] will retry after 3.561022392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.134739    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:32.134739    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:32.138071    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:32.531847    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:32.611685    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:32.617246    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.617246    3528 retry.go:31] will retry after 5.95380248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:33.138488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:33.138875    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:33.141787    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:34.142311    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:34.142734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:34.145176    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.146145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:35.146145    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:35.148924    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.201546    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:35.276874    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:35.281183    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:35.281183    3528 retry.go:31] will retry after 3.730531418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:36.149846    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:36.149846    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.152788    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:57:36.152788    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:36.152788    3528 type.go:168] "Request Body" body=""
	I1210 05:57:36.152788    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.155425    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:37.155901    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:37.155901    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:37.159513    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.161109    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:38.161109    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:38.164724    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.577263    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:38.649489    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:38.652783    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:38.652883    3528 retry.go:31] will retry after 3.457172569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.016926    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:39.102009    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:39.106825    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.106825    3528 retry.go:31] will retry after 7.958311304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.165052    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:39.165052    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:39.167612    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:40.168385    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:40.168385    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:40.171568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:41.172124    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:41.172124    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:41.175998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:42.114835    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:42.176733    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:42.176733    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:42.179377    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:42.194232    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:42.198994    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:42.198994    3528 retry.go:31] will retry after 11.400414998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:43.179774    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:43.179774    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:43.182962    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:44.183364    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:44.183364    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:44.186385    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:45.186936    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:45.187376    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:45.189591    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:46.190096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:46.190096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.196158    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	W1210 05:57:46.196158    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:46.196158    3528 type.go:168] "Request Body" body=""
	I1210 05:57:46.196158    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.198622    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:47.071512    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:47.150023    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:47.153571    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.153571    3528 retry.go:31] will retry after 8.685329621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.199356    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:47.199356    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:47.202855    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:48.203136    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:48.203136    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:48.209086    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:57:49.209940    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:49.209940    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:49.213512    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:50.214412    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:50.214412    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:50.218493    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:57:51.219009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:51.219009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:51.221689    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:52.221931    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:52.221931    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:52.224876    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:53.225848    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:53.225848    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:53.229481    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:53.604916    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:53.684553    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:53.688941    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:53.688941    3528 retry.go:31] will retry after 15.037235136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:54.230291    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:54.230291    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:54.233031    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:55.233749    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:55.233749    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:55.236864    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:55.845563    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:55.917684    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:55.920989    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:55.920989    3528 retry.go:31] will retry after 14.528574699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:56.237162    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:56.237162    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.240358    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:57:56.240358    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:56.240358    3528 type.go:168] "Request Body" body=""
	I1210 05:57:56.240358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.242693    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:57.243108    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:57.243108    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:57.246459    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:58.247768    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:58.248150    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:58.251587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:59.252608    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:59.252608    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:59.255751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:00.256340    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:00.256340    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:00.259424    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:01.260417    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:01.260417    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:01.263835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:02.264658    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:02.264976    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:02.268894    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:03.269646    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:03.270040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:03.272742    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:04.273295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:04.273295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:04.276636    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:05.277239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:05.277639    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:05.280629    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:06.281483    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:06.281483    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.285745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1210 05:58:06.285802    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:06.285840    3528 type.go:168] "Request Body" body=""
	I1210 05:58:06.285987    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.288564    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:07.289127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:07.289127    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:07.292563    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:08.293072    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:08.293072    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:08.297241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:08.732392    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:08.811298    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:08.814895    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:08.814895    3528 retry.go:31] will retry after 24.059893548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:09.297667    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:09.297667    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:09.300824    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:10.301402    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:10.301402    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:10.304411    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:10.455124    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:10.546239    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:10.546239    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:10.546239    3528 retry.go:31] will retry after 31.876597574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:11.304978    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:11.304978    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:11.308149    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:12.308734    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:12.308734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:12.311812    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:13.312561    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:13.313241    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:13.316204    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:14.317485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:14.317883    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:14.320038    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:15.320460    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:15.320460    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:15.323420    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:16.323723    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:16.323723    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.326977    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:16.326977    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:16.327139    3528 type.go:168] "Request Body" body=""
	I1210 05:58:16.327227    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.329681    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:17.330932    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:17.330932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:17.333882    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:18.334334    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:18.334798    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:18.338144    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:19.338534    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:19.338534    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:19.342989    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:20.343612    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:20.343612    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:20.346805    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:21.347681    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:21.347681    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:21.350863    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:22.351290    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:22.351290    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:22.354536    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:23.355239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:23.355239    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:23.358499    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:24.359467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:24.359467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:24.364653    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:58:25.365025    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:25.365025    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:25.368433    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:26.369056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:26.369056    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.372426    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:26.372457    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:26.372457    3528 type.go:168] "Request Body" body=""
	I1210 05:58:26.372457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.374640    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:27.375624    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:27.375624    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:27.379448    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:28.380744    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:28.380744    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:28.384412    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:29.385100    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:29.385455    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:29.388161    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:30.388490    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:30.388490    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:30.391842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:31.392294    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:31.392294    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:31.395842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:32.397016    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:32.397016    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:32.399019    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:32.881902    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:32.967281    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:32.972519    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:32.972519    3528 retry.go:31] will retry after 41.610684516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:33.399525    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:33.399525    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:33.402804    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:34.403496    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:34.403496    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:34.406699    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:35.406992    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:35.406992    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:35.410007    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:36.410696    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:36.410696    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.414578    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:36.414673    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:36.414815    3528 type.go:168] "Request Body" body=""
	I1210 05:58:36.414864    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.417495    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:37.417917    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:37.418702    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:37.421367    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:38.421905    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:38.421905    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:38.424630    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:39.425767    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:39.426355    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:39.429576    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:40.429801    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:40.429801    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:40.433301    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:41.433959    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:41.433959    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:41.437621    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:42.429097    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:42.438217    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:42.438429    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:42.440917    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:42.509794    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.514955    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.515232    3528 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
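The apply failure above has a single cause: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and the apiserver behind localhost:8441 is refusing connections, so the storage-provisioner manifest never reaches the cluster at all. A minimal Go sketch of that pre-flight reachability check follows. It is not minikube code; the /readyz path is the apiserver's standard readiness endpoint, and InsecureSkipVerify is an assumption that only makes sense for a localhost test endpoint with a self-signed certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeAPIServer checks whether the kube-apiserver behind the endpoint in
// the error above is accepting connections at all.
func probeAPIServer(url string) {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Localhost self-signed test endpoint only; never do this in production.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// This is the state the log shows: connect: connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}

func main() {
	// Host and port are taken from the kubectl error above.
	probeAPIServer("https://localhost:8441/readyz")
}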
	[log elided: retry attempts 7-10 (05:58:43-05:58:46) of GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, each a 1s Retry-After followed by an empty response in 2-3 ms]
	W1210 05:58:46.458078    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
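The with_retry.go / round_trippers.go lines that dominate this log all follow one pattern: the GET to the node URL comes back carrying a Retry-After header (the apiserver is not serving real responses yet), client-go sleeps the advertised delay and retries, and after ten attempts the caller (node_ready.go) records the EOF warning above and starts a fresh round. Below is a simplified sketch of that loop's shape; it is not the client-go implementation, and the URL in main is just the one from the log.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter issues a GET and, whenever the response carries a
// Retry-After header, sleeps for the advertised delay and tries again,
// up to maxAttempts.
func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // a real answer; hand it back to the caller
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(ra)
		if err != nil || secs <= 0 {
			secs = 1 // the log above shows delay="1s" on every attempt
		}
		fmt.Printf("attempt=%d: got Retry-After, sleeping %ds\n", attempt, secs)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return nil, fmt.Errorf("still getting Retry-After after %d attempts", maxAttempts)
}

func main() {
	// Against a live minikube this plain GET would also need the cluster's
	// TLS credentials; this is only a sketch of the retry shape.
	if _, err := getWithRetryAfter("https://127.0.0.1:50086/api/v1/nodes/functional-871500", 10); err != nil {
		fmt.Println(err)
	}
}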
	[log elided: fresh node poll at 05:58:46 followed by retry attempts 1-10 (05:58:47-05:58:56) of GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, each a 1s Retry-After and an empty response in 2-4 ms]
	W1210 05:58:56.498627    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 05:58:56 followed by retry attempts 1-10 (05:58:57-05:59:06) of the same GET, each a 1s Retry-After and an empty response in 2-3 ms]
	W1210 05:59:06.538826    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 05:59:06 followed by retry attempts 1-8 (05:59:07-05:59:14) of the same GET, each a 1s Retry-After and an empty response in 2-3 ms]
	I1210 05:59:14.588699    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:59:14.659982    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:59:14.667272    3528 out.go:179] * Enabled addons: 
	I1210 05:59:14.669291    3528 addons.go:530] duration metric: took 1m48.9444759s for enable addons: enabled=[]
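Both addon callbacks (storage-provisioner and default-storageclass) failed against the unreachable apiserver, which is why the "Enabled addons" line above is empty (enabled=[]) after 1m48.9s. The addons.go "apply failed, will retry" entries describe an apply-with-retry callback; a rough sketch of that shape is below. It is not the minikube implementation: kubectlApply stands in for the ssh_runner invocation shown in the log, and the attempt count and backoff are invented for the sketch.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubectlApply is a stand-in for the ssh_runner invocation in the log
// (sudo KUBECONFIG=... kubectl apply --force -f <manifest>).
func kubectlApply(manifest string) error {
	return exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
}

// applyWithRetry retries a failing apply a few times before giving up,
// logging each failure the way addons.go reports "apply failed, will retry".
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = kubectlApply(manifest); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry: %v\n", err)
		time.Sleep(backoff)
	}
	return fmt.Errorf("apply of %s failed after %d attempts: %w", manifest, attempts, err)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 5*time.Second); err != nil {
		fmt.Println(err)
	}
}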
	[log elided: retry attempts 9-10 (05:59:15-05:59:16) of the same GET, each a 1s Retry-After and an empty response in 2-3 ms]
	W1210 05:59:16.581626    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 05:59:16 followed by retry attempts 1-10 (05:59:17-05:59:26) of the same GET, each a 1s Retry-After and an empty response in 2-4 ms]
	W1210 05:59:26.624539    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
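What node_ready.go is retrying in each of these rounds is a read of the Node object's "Ready" condition. The sketch below shows that check with plain HTTP and JSON rather than client-go, to stay self-contained; in reality the request needs the cluster's client certificates, and the empty responses in this log would end in exactly the EOF the warning above reports.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// node is the minimal slice of a v1.Node needed to read its conditions.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Localhost test endpoint with a self-signed certificate only.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://127.0.0.1:50086/api/v1/nodes/functional-871500")
	if err != nil {
		// In the state logged above, this path is what keeps firing (EOF).
		fmt.Println("node GET failed:", err)
		return
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		fmt.Println("decode failed (empty response body):", err)
		return
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Println("node Ready condition:", c.Status)
		}
	}
}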
	[log elided: fresh node poll at 05:59:26 followed by retry attempts 1-10 (05:59:27-05:59:36) of the same GET, each a 1s Retry-After and an empty response in 2-3 ms]
	W1210 05:59:36.667368    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 05:59:36 followed by retry attempts 1-10 (05:59:37-05:59:46) of the same GET, each a 1s Retry-After and an empty response in 2-4 ms]
	W1210 05:59:46.709023    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 05:59:46 followed by retry attempts 1-10 (05:59:47-05:59:56) of the same GET, each a 1s Retry-After and an empty response in 2-4 ms]
	W1210 05:59:56.753273    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 05:59:56 followed by retry attempts 1-10 (05:59:57-06:00:06) of the same GET, each a 1s Retry-After and an empty response in 2-3 ms]
	W1210 06:00:06.794128    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[log elided: fresh node poll at 06:00:06 followed by retry attempts 1-10 (06:00:07-06:00:16) of the same GET, each a 1s Retry-After and an empty response in 1-3 ms]
	W1210 06:00:16.833829    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
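	[editor's note: the loop above is client-go's standard retry path. Each GET to /api/v1/nodes/functional-871500 fails before a status code arrives (hence status="" headers="" in every Response line), with_retry honors a 1-second Retry-After for up to ten attempts, and minikube's node_ready check logs the EOF warning and starts over. A minimal sketch of this poll pattern, assuming a standard client-go setup — the kubeconfig path, interval, and timeout below are illustrative, not minikube's actual wiring:

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative kubeconfig path; minikube resolves its own profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("build clientset: %v", err)
		}

		// Poll every second (matching the log's 1s retry cadence) until the
		// node reports Ready or the timeout expires.
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 5*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-871500", metav1.GetOptions{})
				if err != nil {
					// Transient errors (e.g. EOF while the apiserver is down)
					// are swallowed so the poll retries — the "will retry" in the log.
					log.Printf("error getting node (will retry): %v", err)
					return false, nil
				}
				return nodeReady(node), nil
			})
		if err != nil {
			log.Fatalf("node never became Ready: %v", err)
		}
	}

	In this run the condition never succeeds: every request dies with EOF before the apiserver answers, so the poll loops until the test's own timeout fires. End of editor's note.]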
	[identical retry cycles elided: the same cycle — one immediate GET plus ten 1-second Retry-After retries, every response empty — repeats from 06:00:16 to 06:01:47, and the EOF warning for node "functional-871500" recurs at 06:00:26, 06:00:36, 06:00:46, 06:00:56, 06:01:07, 06:01:17, 06:01:27, and 06:01:37]
	W1210 06:01:47.206151    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
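
What this loop is doing: minikube's node_ready check polls GET https://127.0.0.1:50086/api/v1/nodes/functional-871500 once per second; each request fails with EOF, client-go's retry helper (with_retry.go) treats it as a Retry-After with a 1s delay, and after ten attempts the check logs the "will retry" warning above and starts over. Below is a minimal sketch of the same readiness-polling pattern; it is not minikube's actual implementation, and the kubeconfig path, 5-minute timeout, and hard-coded node name are illustrative placeholders.

// Minimal sketch of the polling loop in the log above; not minikube's
// actual code. Placeholders (assumptions): the kubeconfig path, the
// 5-minute timeout, and the node name.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's "Ready" condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForNodeReady polls the node once per second, logging a warning
// every ten failed attempts (the cadence seen in the log), until the
// node reports Ready or ctx expires.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for attempt := 1; ; attempt++ {
		n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			return nil
		}
		if err != nil && attempt%10 == 0 {
			log.Printf("error getting node %q condition \"Ready\" status (will retry): %v", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(1 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "C:\\path\\to\\kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "functional-871500"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}
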
	[repetitive log elided: the identical cycle above repeats — one GET to https://127.0.0.1:50086/api/v1/nodes/functional-871500 per second, retried on a Retry-After response (attempts 1-10 per cycle), with the same warning 'error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF' logged at 06:01:57, 06:02:07, 06:02:17, 06:02:27, 06:02:37, 06:02:47, 06:02:57, 06:03:07, and 06:03:17; the trace resumes mid-cycle at 06:03:20]
	I1210 06:03:20.595361    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:20.597940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:21.598595    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:21.598595    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:21.601244    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:22.601730    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:22.601730    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:22.604442    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:23.605664    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:23.605664    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:23.608404    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:24.609206    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:24.609206    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:24.612484    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:25.613066    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:25.613066    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:25.615998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:03:26.117891    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 06:03:26.117891    3528 node_ready.go:38] duration metric: took 6m0.0004685s for node "functional-871500" to be "Ready" ...
	I1210 06:03:26.123026    3528 out.go:203] 
	W1210 06:03:26.125419    3528 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:03:26.125419    3528 out.go:285] * 
	W1210 06:03:26.127475    3528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:03:26.130878    3528 out.go:203] 
	
	
	==> Docker <==
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483189206Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483194507Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483214008Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483249911Z" level=info msg="Initializing buildkit"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.582637464Z" level=info msg="Completed buildkit initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589253381Z" level=info msg="Daemon has completed initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589392791Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589467497Z" level=info msg="API listen on [::]:2376"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589490799Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 05:57:22 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:22 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 05:57:23 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Loaded network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 05:57:23 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:04:22.922916   18680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:04:22.924505   18680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:04:22.926853   18680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:04:22.928232   18680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:04:22.929517   18680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001083] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001015] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000877] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 05:57] CPU: 2 PID: 55724 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000754] RIP: 0033:0x7fd067afcb20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fd067afcaf6.
	[  +0.000673] RSP: 002b:00007ffe57c686d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000893] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000747] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000734] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000747] FS:  0000000000000000 GS:  0000000000000000
	[  +0.824990] CPU: 8 PID: 55850 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000805] RIP: 0033:0x7f91646e5b20
	[  +0.000401] Code: Unable to access opcode bytes at RIP 0x7f91646e5af6.
	[  +0.000653] RSP: 002b:00007ffe3817fb80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000798] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:04:22 up  1:32,  0 user,  load average: 0.24, 0.30, 0.60
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:04:19 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:04:20 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 888.
	Dec 10 06:04:20 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:20 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:20 functional-871500 kubelet[18525]: E1210 06:04:20.584499   18525 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:04:20 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:04:20 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:04:21 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 889.
	Dec 10 06:04:21 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:21 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:21 functional-871500 kubelet[18538]: E1210 06:04:21.362193   18538 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:04:21 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:04:21 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:04:22 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 890.
	Dec 10 06:04:22 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:22 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:22 functional-871500 kubelet[18566]: E1210 06:04:22.097022   18566 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:04:22 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:04:22 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:04:22 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 891.
	Dec 10 06:04:22 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:22 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:04:22 functional-871500 kubelet[18653]: E1210 06:04:22.843613   18653 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:04:22 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:04:22 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
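The failure above is self-consistent: the kubelet section shows v1.35.0-rc.1 refusing to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1", restart counter already at 891), so the apiserver behind port 8441 never comes up, every GET against https://127.0.0.1:50086 ends in EOF, and the node never reaches Ready inside the 6m0s wait. A minimal check of the cgroup version, assuming the container name from this run (illustrative, not part of the harness):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy this kubelet rejects.
	docker exec functional-871500 stat -fc %T /sys/fs/cgroup/
	# The engine reports the same fact directly (Docker Desktop on WSL2 here).
	docker info --format '{{.CgroupVersion}}'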
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (596.6177ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (53.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (54.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 kubectl -- --context functional-871500 get pods
E1210 06:04:45.900288   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:731: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 kubectl -- --context functional-871500 get pods: exit status 1 (50.5879923s)

** stderr ** 
	E1210 06:04:53.817990   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:05:03.900092   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:05:13.942868   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:05:23.983345   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:05:34.024588   10032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-871500 kubectl -- --context functional-871500 get pods": exit status 1
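The EOFs above are the same root cause, not a new failure: 127.0.0.1:50086 is the host port Docker publishes for the apiserver's 8441/tcp (see the docker inspect output below), so an EOF means nothing is answering inside the container rather than a broken mapping. An illustrative probe, with the port number taken from this particular run:

	# Confirm the published mapping, then hit the apiserver health endpoint.
	docker port functional-871500 8441/tcp
	curl -k --max-time 5 https://127.0.0.1:50086/livez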
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
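The inspect output shows the KIC container itself is fine: running since 05:48:42 with 4 GiB of memory and 2 CPUs, and 8441/tcp published at 127.0.0.1:50086. The breakage is entirely inside the container (the kubelet crash loop above). The mapping can also be read back without scanning the full JSON, for example with a Go template (illustrative):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-871500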
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (651.4486ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.6527166s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-493600 image ls --format yaml --alsologtostderr                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh     │ functional-493600 ssh pgrep buildkitd                                                                                 │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls                                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls --format json --alsologtostderr                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service │ functional-493600 service hello-node --url                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image ls --format table --alsologtostderr                                                           │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p functional-493600                                                                                                  │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start   │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	│ start   │ -p functional-871500 --alsologtostderr -v=8                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:57 UTC │                     │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.1                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.3                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:latest                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add minikube-local-cache-test:functional-871500                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache delete minikube-local-cache-test:functional-871500                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ list                                                                                                                  │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl images                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ cache   │ functional-871500 cache reload                                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                   │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl │ functional-871500 kubectl -- --context functional-871500 get pods                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:57:16
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:57:16.875847    3528 out.go:360] Setting OutFile to fd 1624 ...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.917657    3528 out.go:374] Setting ErrFile to fd 1612...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.932616    3528 out.go:368] Setting JSON to false
	I1210 05:57:16.934770    3528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5168,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:57:16.934770    3528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:57:16.939605    3528 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:57:16.942014    3528 notify.go:221] Checking for updates...
	I1210 05:57:16.946622    3528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:16.950394    3528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:57:16.952350    3528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:57:16.955212    3528 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:57:16.957439    3528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:57:16.962034    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:16.962229    3528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:57:17.077929    3528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:57:17.082453    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.310960    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.287646185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.314972    3528 out.go:179] * Using the docker driver based on existing profile
	I1210 05:57:17.316973    3528 start.go:309] selected driver: docker
	I1210 05:57:17.316973    3528 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.316973    3528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:57:17.322956    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.562979    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.536373793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.650233    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:17.650233    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:17.650860    3528 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.654219    3528 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 05:57:17.656244    3528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:57:17.659128    3528 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:57:17.661459    3528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:57:17.661459    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:17.661583    3528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:57:17.661583    3528 cache.go:65] Caching tarball of preloaded images
	I1210 05:57:17.661583    3528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 05:57:17.662115    3528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 05:57:17.662465    3528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
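(The profile.go:143 step above persists the cluster config shown in the dump as JSON under the profile directory. For illustration only, a minimal sketch of that save pattern in Go; the ClusterConfig struct here is a hypothetical, heavily trimmed stand-in, not minikube's real type:)

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is a hypothetical, trimmed-down stand-in for the
// config structure logged above; the real type has many more fields.
type ClusterConfig struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	KubernetesVersion string `json:"KubernetesVersion"`
	APIServerPort     int    `json:"APIServerPort"`
}

// saveProfile writes config.json via a temp file plus rename, so a
// crash mid-write does not leave a truncated profile behind.
func saveProfile(dir string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(dir, "config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	_ = saveProfile(".", ClusterConfig{
		Name: "functional-871500", Driver: "docker",
		KubernetesVersion: "v1.35.0-rc.1", APIServerPort: 8441,
	})
}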
	I1210 05:57:17.734611    3528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:57:17.734611    3528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 05:57:17.734611    3528 cache.go:243] Successfully downloaded all kic artifacts
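(The image.go:81/100 steps above check the local daemon for the pinned kicbase digest before deciding whether to pull. `docker image inspect` exits non-zero when an image is absent, which makes a cheap existence check; a sketch of that probe, shelling out to the real docker CLI:)

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has
// ref; `docker image inspect` fails when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
	fmt.Println(imageInDaemon(ref)) // true => skip pull, as in the log
}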
	I1210 05:57:17.734611    3528 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:57:17.735277    3528 start.go:364] duration metric: took 104.4µs to acquireMachinesLock for "functional-871500"
	I1210 05:57:17.735336    3528 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:57:17.735336    3528 fix.go:54] fixHost starting: 
	I1210 05:57:17.741445    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:17.794847    3528 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 05:57:17.794847    3528 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:57:17.798233    3528 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 05:57:17.798233    3528 machine.go:94] provisionDockerMachine start ...
	I1210 05:57:17.802052    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:17.859397    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:17.860025    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:17.860025    3528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:57:18.039007    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.039007    3528 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 05:57:18.043768    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.100666    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.100666    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.100666    3528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 05:57:18.283797    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.287904    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.342863    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.343348    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.343409    3528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:57:18.533020    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:57:18.533020    3528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 05:57:18.533020    3528 ubuntu.go:190] setting up certificates
	I1210 05:57:18.533020    3528 provision.go:84] configureAuth start
	I1210 05:57:18.537250    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:18.595140    3528 provision.go:143] copyHostCerts
	I1210 05:57:18.595839    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1210 05:57:18.596031    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 05:57:18.596062    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 05:57:18.596239    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 05:57:18.596845    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1210 05:57:18.597366    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 05:57:18.597406    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 05:57:18.597495    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 05:57:18.598291    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 05:57:18.598291    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 05:57:18.599093    3528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
	I1210 05:57:18.702479    3528 provision.go:177] copyRemoteCerts
	I1210 05:57:18.706176    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:57:18.709177    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.761464    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:18.886181    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1210 05:57:18.886181    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:57:18.914027    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1210 05:57:18.914027    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:57:18.939266    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1210 05:57:18.939794    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:57:18.968597    3528 provision.go:87] duration metric: took 435.5446ms to configureAuth
	I1210 05:57:18.968633    3528 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:57:18.969064    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:18.972714    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.026843    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.027475    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.027475    3528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 05:57:19.213570    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 05:57:19.213570    3528 ubuntu.go:71] root file system type: overlay
	I1210 05:57:19.213570    3528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 05:57:19.217470    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.271762    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.271762    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.271762    3528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 05:57:19.465304    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 05:57:19.469988    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.524496    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.525153    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.525153    3528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 05:57:19.708281    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 05:57:19.708281    3528 machine.go:97] duration metric: took 1.9100246s to provisionDockerMachine
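(The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner above only swaps in docker.service.new and bounces the daemon when the rendered unit actually differs, which keeps re-provisioning an already-running machine cheap. A sketch of that compare-then-replace pattern, under the assumption of a local systemd host; not minikube's implementation:)

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnitIfChanged installs newPath over unitPath and restarts the
// service only when contents differ, mirroring the one-liner in the log.
func updateUnitIfChanged(unitPath, newPath, service string) error {
	oldData, _ := os.ReadFile(unitPath) // a missing unit counts as "changed"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(newPath) // unchanged: drop the staged copy
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", service}, {"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnitIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}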
	I1210 05:57:19.708281    3528 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 05:57:19.708281    3528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:57:19.712864    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:57:19.716356    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.769263    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:19.910607    3528 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:57:19.918702    3528 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_ID="12"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:57:19.918702    3528 command_runner.go:130] > ID=debian
	I1210 05:57:19.918702    3528 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:57:19.918702    3528 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:57:19.918702    3528 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:57:19.918927    3528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:57:19.919018    3528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:57:19.919060    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 05:57:19.919569    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 05:57:19.919739    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 05:57:19.919739    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /etc/ssl/certs/113042.pem
	I1210 05:57:19.921060    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 05:57:19.921102    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> /etc/test/nested/copy/11304/hosts
	I1210 05:57:19.926330    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 05:57:19.937995    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 05:57:19.967462    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 05:57:19.996671    3528 start.go:296] duration metric: took 288.3864ms for postStartSetup
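(The filesync scan above maps everything under .minikube\files into the same path inside the guest, e.g. ...\files\etc\ssl\certs\113042.pem becomes /etc/ssl/certs/113042.pem. A sketch of that host-to-guest path mapping with filepath.WalkDir; illustrative only, since the real filesync.go also handles permissions and asset metadata:)

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// guestAssets walks a local "files" tree and returns host->guest path
// pairs, converting Windows separators to the guest's forward slashes.
func guestAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		assets[p] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
		return nil
	})
	return assets, err
}

func main() {
	m, err := guestAssets(".minikube/files")
	fmt.Println(m, err)
}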
	I1210 05:57:20.001220    3528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:20.004094    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.057975    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.183984    3528 command_runner.go:130] > 1%
	I1210 05:57:20.188612    3528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:57:20.199532    3528 command_runner.go:130] > 950G
	I1210 05:57:20.200170    3528 fix.go:56] duration metric: took 2.4648044s for fixHost
	I1210 05:57:20.200170    3528 start.go:83] releasing machines lock for "functional-871500", held for 2.4648316s
	I1210 05:57:20.204329    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:20.260852    3528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 05:57:20.265678    3528 ssh_runner.go:195] Run: cat /version.json
	I1210 05:57:20.265678    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.268055    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.318377    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.318938    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.440815    3528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1210 05:57:20.440815    3528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 05:57:20.448568    3528 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:57:20.452774    3528 ssh_runner.go:195] Run: systemctl --version
	I1210 05:57:20.464224    3528 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:57:20.464224    3528 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:57:20.469738    3528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:57:20.478403    3528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:57:20.478403    3528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:57:20.483606    3528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:57:20.495780    3528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:57:20.495780    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:20.495780    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:20.495780    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:20.518759    3528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:57:20.523282    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:57:20.541393    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 05:57:20.546364    3528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 05:57:20.546364    3528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 05:57:20.557861    3528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:57:20.562880    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:57:20.580735    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.598803    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:57:20.615367    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.637025    3528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:57:20.656757    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:57:20.676589    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:57:20.695912    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:57:20.717653    3528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:57:20.732788    3528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:57:20.737410    3528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:57:20.756411    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:20.908020    3528 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 05:57:21.078402    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:21.078402    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:21.083945    3528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Unit]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Description=Docker Application Container Engine
	I1210 05:57:21.102632    3528 command_runner.go:130] > Documentation=https://docs.docker.com
	I1210 05:57:21.102632    3528 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1210 05:57:21.102632    3528 command_runner.go:130] > Wants=network-online.target containerd.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > Requires=docker.socket
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitBurst=3
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitIntervalSec=60
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Service]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Type=notify
	I1210 05:57:21.102632    3528 command_runner.go:130] > Restart=always
	I1210 05:57:21.102632    3528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1210 05:57:21.102632    3528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1210 05:57:21.102632    3528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1210 05:57:21.102632    3528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1210 05:57:21.102632    3528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1210 05:57:21.102632    3528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1210 05:57:21.102632    3528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1210 05:57:21.102632    3528 command_runner.go:130] > ExecStart=
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1210 05:57:21.103158    3528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1210 05:57:21.103158    3528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNOFILE=infinity
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNPROC=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > LimitCORE=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1210 05:57:21.103378    3528 command_runner.go:130] > TasksMax=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > TimeoutStartSec=0
	I1210 05:57:21.103378    3528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1210 05:57:21.103378    3528 command_runner.go:130] > Delegate=yes
	I1210 05:57:21.103378    3528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1210 05:57:21.103378    3528 command_runner.go:130] > KillMode=process
	I1210 05:57:21.103378    3528 command_runner.go:130] > OOMScoreAdjust=-500
	I1210 05:57:21.103378    3528 command_runner.go:130] > [Install]
	I1210 05:57:21.103378    3528 command_runner.go:130] > WantedBy=multi-user.target
	I1210 05:57:21.111084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.134007    3528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:57:21.193270    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.218062    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:57:21.240026    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:21.262345    3528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1210 05:57:21.267460    3528 ssh_runner.go:195] Run: which cri-dockerd
	I1210 05:57:21.274915    3528 command_runner.go:130] > /usr/bin/cri-dockerd
	I1210 05:57:21.278860    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 05:57:21.290698    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 05:57:21.314565    3528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 05:57:21.466409    3528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 05:57:21.603844    3528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 05:57:21.603844    3528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 05:57:21.630009    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:57:21.650723    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:21.786633    3528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:57:22.595739    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:57:22.618130    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 05:57:22.639399    3528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 05:57:22.666084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:22.689760    3528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 05:57:22.826287    3528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 05:57:22.966482    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.147658    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 05:57:23.173945    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 05:57:23.199471    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.338742    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 05:57:23.455945    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:23.474438    3528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 05:57:23.478444    3528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:57:23.486000    3528 command_runner.go:130] > Device: 0,112	Inode: 1768        Links: 1
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Modify: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Change: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] >  Birth: -
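("Will wait 60s for socket path" above amounts to polling until the socket file shows up. Here the first stat already succeeds, but the general wait is a poll loop with a deadline; a minimal sketch using os.Stat, not the harness's actual code:)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or the
// deadline passes, mirroring the "Will wait 60s" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}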
	I1210 05:57:23.486000    3528 start.go:564] Will wait 60s for crictl version
	I1210 05:57:23.490664    3528 ssh_runner.go:195] Run: which crictl
	I1210 05:57:23.496443    3528 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:57:23.501067    3528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:57:23.549049    3528 command_runner.go:130] > Version:  0.1.0
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeName:  docker
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:57:23.549049    3528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 05:57:23.552780    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.592051    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.595007    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.630739    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.635076    3528 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 05:57:23.638761    3528 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 05:57:23.765960    3528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 05:57:23.770487    3528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 05:57:23.780262    3528 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1210 05:57:23.784121    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:23.838579    3528 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:57:23.838579    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:23.841570    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.871575    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.871575    3528 docker.go:621] Images already preloaded, skipping extraction
	I1210 05:57:23.875579    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.907148    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.907148    3528 cache_images.go:86] Images are preloaded, skipping loading
	I1210 05:57:23.907148    3528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 05:57:23.907668    3528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
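(The kubelet drop-in above is rendered per node from the cluster config: hostname-override and node-ip vary, the rest is fixed. A hypothetical text/template sketch of rendering such a unit; the template string here is an assumption for illustration, not minikube's actual template:)

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a hypothetical template for the ExecStart line shown
// in the log above, trimmed to the node-specific flags.
const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.35.0-rc.1",
		"NodeName":          "functional-871500",
		"NodeIP":            "192.168.49.2",
	})
}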
	I1210 05:57:23.911609    3528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 05:57:23.978720    3528 command_runner.go:130] > cgroupfs
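(`docker info --format {{.CgroupDriver}}` above is the standard docker CLI way to read the daemon's cgroup driver; minikube compares the answer ("cgroupfs" here) against the driver it detected on the host before writing daemon.json. A sketch of that probe using the real CLI flags:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver asks the local docker daemon which cgroup driver
// it runs with ("cgroupfs" or "systemd").
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	fmt.Println(driver, err)
}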
	I1210 05:57:23.983482    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:23.983482    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:23.983482    3528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:57:23.983482    3528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:57:23.983482    3528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
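(The generated kubeadm config above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A quick way to sanity-check such a file before shipping it to the node is to decode each document and read its apiVersion/kind; a sketch assuming gopkg.in/yaml.v3 is available (minikube itself renders this from templates rather than re-parsing it):)

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// listKinds decodes each YAML document in a kubeadm config and prints
// its apiVersion/kind, failing fast on malformed YAML.
func listKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			return nil
		} else if err != nil {
			return err
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

func main() {
	if err := listKinds("kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}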
	
	I1210 05:57:23.987498    3528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubeadm
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubectl
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubelet
	I1210 05:57:24.000182    3528 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:57:24.004093    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:57:24.018408    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 05:57:24.041215    3528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:57:24.061272    3528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1210 05:57:24.082615    3528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:57:24.095804    3528 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:57:24.101162    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:24.247994    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:24.548481    3528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 05:57:24.548481    3528 certs.go:195] generating shared ca certs ...
	I1210 05:57:24.549012    3528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:24.549698    3528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 05:57:24.549774    3528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 05:57:24.549774    3528 certs.go:257] generating profile certs ...
	I1210 05:57:24.550590    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:57:24.551460    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:57:24.551604    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:57:24.551764    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:57:24.551869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:57:24.552075    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:57:24.552075    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 05:57:24.552075    3528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 05:57:24.552617    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 05:57:24.553394    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 05:57:24.553588    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.553766    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem -> /usr/share/ca-certificates/11304.pem
	I1210 05:57:24.553869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /usr/share/ca-certificates/113042.pem
	I1210 05:57:24.554786    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:57:24.581958    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:57:24.609312    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:57:24.634601    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:57:24.661713    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:57:24.690256    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:57:24.717784    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:57:24.748075    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:57:24.779590    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:57:24.808619    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 05:57:24.838348    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 05:57:24.862790    3528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:57:24.888297    3528 ssh_runner.go:195] Run: openssl version
	I1210 05:57:24.898078    3528 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:57:24.902400    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.918304    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:57:24.936062    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946045    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946080    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.950017    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.993898    3528 command_runner.go:130] > b5213941
	I1210 05:57:24.999156    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:57:25.016159    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.034260    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 05:57:25.053147    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.065786    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.108176    3528 command_runner.go:130] > 51391683
	I1210 05:57:25.113321    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:57:25.129918    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.147630    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 05:57:25.167521    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.180991    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.223232    3528 command_runner.go:130] > 3ec20f2e
	I1210 05:57:25.227937    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
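
The three sequences above follow OpenSSL's subject-hash convention: each CA file installed under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and a symlink named <hash>.0 is created in /etc/ssl/certs so that hash-based certificate lookup finds it (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal Go sketch of the same two steps; installCACert and its error handling are illustrative, not minikube's actual code:

// installCACert derives the OpenSSL subject hash of a CA certificate and
// links /etc/ssl/certs/<hash>.0 at it, mirroring the hash + ln -fs steps
// in the log above. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCACert(certPath string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash,
	// e.g. b5213941, which OpenSSL uses as the symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // behave like ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
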
	I1210 05:57:25.244300    3528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:57:25.251407    3528 command_runner.go:130] > Device: 8,48	Inode: 15342       Links: 1
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: 2025-12-10 05:53:12.664767007 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Modify: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Change: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] >  Birth: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.255353    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:57:25.300587    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.306046    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:57:25.348642    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.354977    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:57:25.399294    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.403503    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:57:25.448300    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.453152    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:57:25.506357    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.511028    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:57:25.553903    3528 command_runner.go:130] > Certificate will not expire
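
Each control-plane certificate above is probed with "openssl x509 -checkend 86400", which asks whether the certificate expires within the next 86400 seconds (24 hours). The same check can be done in-process with crypto/x509; a sketch under the assumption that each file holds a single PEM-encoded certificate:

// certExpiresWithin reports whether the PEM certificate at path expires
// within d, the question "openssl x509 -checkend 86400" answers for each
// cert in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring within d means "now + d" is already past NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if expiring {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate will not expire")
	}
}
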
	I1210 05:57:25.554908    3528 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:25.558842    3528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 05:57:25.593738    3528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:57:25.607577    3528 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:57:25.607628    3528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:57:25.607628    3528 kubeadm.go:598] restartPrimaryControlPlane start ...
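
The restart decision above hinges on a single probe: if the kubelet config, the kubeadm flags file, and the etcd data directory all exist, the control plane was initialized before and a cluster restart is attempted instead of a fresh kubeadm init. A sketch of that check using the paths from the log; hasExistingControlPlaneConfig is an illustrative name, not minikube's actual function:

// hasExistingControlPlaneConfig approximates the "sudo ls ..." probe above:
// all three paths present means a prior init, so restart rather than init.
package main

import (
	"fmt"
	"os"
)

func hasExistingControlPlaneConfig() bool {
	paths := []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false // any missing path means no prior init
		}
	}
	return true
}

func main() {
	if hasExistingControlPlaneConfig() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior configuration, fresh init required")
	}
}
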
	I1210 05:57:25.611091    3528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:57:25.623212    3528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:57:25.626623    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.680358    3528 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.681186    3528 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-871500" cluster setting kubeconfig missing "functional-871500" context setting]
	I1210 05:57:25.681273    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.700123    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.700864    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.702157    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.702219    3528 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:57:25.702289    3528 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 05:57:25.706500    3528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:57:25.721533    3528 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 05:57:25.721533    3528 kubeadm.go:602] duration metric: took 113.9037ms to restartPrimaryControlPlane
	I1210 05:57:25.721533    3528 kubeadm.go:403] duration metric: took 166.6224ms to StartCluster
	I1210 05:57:25.721533    3528 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.721533    3528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.722880    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.723468    3528 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 05:57:25.723468    3528 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:57:25.723468    3528 addons.go:70] Setting storage-provisioner=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:70] Setting default-storageclass=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:239] Setting addon storage-provisioner=true in "functional-871500"
	I1210 05:57:25.723990    3528 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-871500"
	I1210 05:57:25.723990    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:25.724039    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.727290    3528 out.go:179] * Verifying Kubernetes components...
	I1210 05:57:25.732528    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733215    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733847    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:25.784477    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.784477    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.785479    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.785479    3528 addons.go:239] Setting addon default-storageclass=true in "functional-871500"
	I1210 05:57:25.785479    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.792481    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.809483    3528 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:25.812486    3528 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:25.812486    3528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:57:25.815477    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.843475    3528 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:25.843475    3528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:57:25.846475    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.863476    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.889481    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:25.893492    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
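
The "scp memory --> ..." entries above are writes without a local temp file: the manifest bytes are streamed over the SSH connection just established (127.0.0.1:50082) into a privileged writer on the node. A sketch of that pattern with golang.org/x/crypto/ssh; the sudo tee target, key path, and manifest bytes are placeholders drawn from the log, and host-key checking is skipped only because the target is a local container:

// scpFromMemory pipes in-memory bytes over an SSH session into "sudo tee",
// approximating the scp-from-memory step in the log. Sketch only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func scpFromMemory(client *ssh.Client, data []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	stdin, err := session.StdinPipe()
	if err != nil {
		return err
	}
	// Start the remote writer, stream the bytes, then close stdin so tee exits.
	if err := session.Start(fmt.Sprintf("sudo tee %s > /dev/null", remotePath)); err != nil {
		return err
	}
	if _, err := stdin.Write(data); err != nil {
		return err
	}
	stdin.Close()
	return session.Wait()
}

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:50082", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("# storage-provisioner manifest bytes held in memory")
	if err := scpFromMemory(client, manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		panic(err)
	}
}
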
	I1210 05:57:25.997793    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.023732    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.053186    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:26.112921    3528 node_ready.go:35] waiting up to 6m0s for node "functional-871500" to be "Ready" ...
	I1210 05:57:26.112921    3528 type.go:168] "Request Body" body=""
	I1210 05:57:26.113457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:26.116638    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:26.133091    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.136407    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.136407    3528 retry.go:31] will retry after 345.217772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.150366    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.202827    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.202827    3528 retry.go:31] will retry after 151.034764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
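
The apply failures above repeat with growing, slightly randomized delays (345ms, 151ms, 206ms, ...): the apiserver is not yet listening on 8441, so each kubectl apply is retried under a jittered backoff until it succeeds or the overall start deadline passes. A stand-in sketch of that loop, not minikube's exact retry helper:

// retryWithBackoff retries fn with a jittered, roughly doubling delay,
// matching the "will retry after ..." pattern in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", deadline, err)
		}
		// Jitter the delay so the two concurrent appliers (storageclass and
		// storage-provisioner) do not hammer the apiserver in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connection refused") // apiserver not up yet
		}
		return nil
	})
	fmt.Println("result:", err)
}
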
	I1210 05:57:26.359087    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.431671    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.436291    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.436291    3528 retry.go:31] will retry after 206.058838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.486383    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.557721    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.560620    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.560620    3528 retry.go:31] will retry after 499.995799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.648783    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.718122    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.721048    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.721048    3528 retry.go:31] will retry after 393.754282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.063815    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.116921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:27.116921    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:27.119587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:27.119858    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:27.142617    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.145831    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.145969    3528 retry.go:31] will retry after 468.483229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.204933    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.208432    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.208432    3528 retry.go:31] will retry after 855.193396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.619421    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.706849    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.710739    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.710739    3528 retry.go:31] will retry after 912.738336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.069754    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:28.120644    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:28.120644    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:28.123531    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:28.143254    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.148927    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.148927    3528 retry.go:31] will retry after 983.332816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.628567    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:28.701176    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.706795    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.706795    3528 retry.go:31] will retry after 1.385287928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.123599    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:29.123599    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:29.126305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:29.136958    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:29.206724    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:29.211387    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.211387    3528 retry.go:31] will retry after 1.736840395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.096718    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:30.126845    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:30.126845    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:30.129697    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:30.181502    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:30.186062    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.186111    3528 retry.go:31] will retry after 1.361370091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.954728    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:31.028355    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.034556    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.034556    3528 retry.go:31] will retry after 1.491617713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.130593    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:31.130593    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:31.133462    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:31.553535    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:31.628770    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.634748    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.634748    3528 retry.go:31] will retry after 3.561022392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.134739    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:32.134739    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:32.138071    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:32.531847    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:32.611685    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:32.617246    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.617246    3528 retry.go:31] will retry after 5.95380248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:33.138488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:33.138875    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:33.141787    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:34.142311    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:34.142734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:34.145176    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.146145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:35.146145    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:35.148924    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
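
Interleaved with the addon retries, the node readiness poll keeps receiving Retry-After responses from https://127.0.0.1:50086 and reissues the GET after the advertised one-second delay, bumping the attempt counter each time. A plain net/http sketch of honoring Retry-After; the URL comes from the log, and certificate verification is skipped only because this is a local illustration:

// getWithRetryAfter reissues a GET whenever the response carries a
// Retry-After header, up to maxAttempts, sleeping the advertised delay.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(ra)
		if err != nil {
			secs = 1 // fall back to the 1s delay seen in the log
		}
		fmt.Printf("got Retry-After response, delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	client := &http.Client{
		// The test apiserver presents a self-signed minikubeCA cert;
		// verification is skipped only for this local sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	resp, err := getWithRetryAfter(client, "https://127.0.0.1:50086/api/v1/nodes/functional-871500", 10)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
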
	I1210 05:57:35.201546    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:35.276874    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:35.281183    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:35.281183    3528 retry.go:31] will retry after 3.730531418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:36.149846    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:36.149846    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.152788    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:57:36.152788    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:36.152788    3528 type.go:168] "Request Body" body=""
	I1210 05:57:36.152788    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.155425    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:37.155901    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:37.155901    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:37.159513    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.161109    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:38.161109    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:38.164724    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.577263    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:38.649489    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:38.652783    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:38.652883    3528 retry.go:31] will retry after 3.457172569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.016926    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:39.102009    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:39.106825    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.106825    3528 retry.go:31] will retry after 7.958311304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
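
Both addon applies fail the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, and nothing is listening on localhost:8441, so the apply never reaches the manifest itself. The error text suggests --validate=false, which skips that schema download; as a hedged sketch (not minikube's code), invoking kubectl that way from Go would look like the snippet below. Note this only bypasses validation for diagnosis; against a dead apiserver the apply would still fail.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "kubectl", "apply",
            "--validate=false", // skip the OpenAPI download that failed in the log
            "-f", "/etc/kubernetes/addons/storageclass.yaml",
        )
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
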
	I1210 05:57:39.165052    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:39.165052    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:39.167612    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:40.168385    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:40.168385    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:40.171568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:41.172124    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:41.172124    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:41.175998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:42.114835    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:42.176733    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:42.176733    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:42.179377    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:42.194232    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:42.198994    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:42.198994    3528 retry.go:31] will retry after 11.400414998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
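
The retry.go delays for the storageclass apply grow across attempts (3.46s, then 11.40s above, with larger waits later in the log), consistent with exponential backoff plus random jitter. A sketch of that delay schedule follows; the starting delay, growth factor, and jitter fraction are illustrative guesses, not minikube's actual constants.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextDelay doubles the previous delay and adds up to 50% random jitter,
    // which produces an irregular, roughly geometric sequence like the log's.
    func nextDelay(prev time.Duration) time.Duration {
        d := prev * 2
        jitter := time.Duration(rand.Int63n(int64(d) / 2))
        return d + jitter
    }

    func main() {
        delay := 2 * time.Second // assumed starting point
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: will retry after %v\n", attempt, delay)
            time.Sleep(10 * time.Millisecond) // stand-in for the failing apply
            delay = nextDelay(delay)
        }
    }
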
	I1210 05:57:43.179774    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:43.179774    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:43.182962    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:44.183364    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:44.183364    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:44.186385    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:45.186936    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:45.187376    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:45.189591    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:46.190096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:46.190096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.196158    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	W1210 05:57:46.196158    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:46.196158    3528 type.go:168] "Request Body" body=""
	I1210 05:57:46.196158    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.198622    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
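
The node_ready.go warnings recur about every ten seconds: each window is ten one-second client attempts, then the "will retry" warning, then a fresh poll. A minimal "wait for node Ready" loop with an assumed overall deadline might look like this; checkReady is a hypothetical stand-in for the GET that keeps returning EOF in the log.

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func checkReady(ctx context.Context) error {
        // Placeholder: the real check GETs /api/v1/nodes/<name> and inspects
        // the "Ready" condition. Here it always fails, like the log.
        return errors.New(`Get "https://127.0.0.1:50086/...": EOF`)
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
        defer cancel()

        ticker := time.NewTicker(10 * time.Second) // matches the ~10s warning cadence
        defer ticker.Stop()

        for {
            if err := checkReady(ctx); err == nil {
                fmt.Println("node is Ready")
                return
            } else {
                fmt.Println("not Ready yet (will retry):", err)
            }
            select {
            case <-ticker.C:
            case <-ctx.Done():
                fmt.Println("gave up waiting:", ctx.Err())
                return
            }
        }
    }
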
	I1210 05:57:47.071512    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:47.150023    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:47.153571    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.153571    3528 retry.go:31] will retry after 8.685329621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.199356    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:47.199356    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:47.202855    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:48.203136    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:48.203136    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:48.209086    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:57:49.209940    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:49.209940    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:49.213512    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:50.214412    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:50.214412    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:50.218493    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:57:51.219009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:51.219009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:51.221689    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:52.221931    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:52.221931    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:52.224876    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:53.225848    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:53.225848    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:53.229481    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:53.604916    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:53.684553    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:53.688941    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:53.688941    3528 retry.go:31] will retry after 15.037235136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:54.230291    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:54.230291    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:54.233031    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:55.233749    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:55.233749    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:55.236864    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
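
The paired round_trippers.go:527/632 lines come from a logging wrapper around the HTTP transport: one line per outgoing request (verb, URL, headers) and one per response (status, latency in milliseconds). A self-contained equivalent of that pattern is sketched below; it mimics the shape of the log output but is not client-go's actual round-tripper.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    type loggingTransport struct{ next http.RoundTripper }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
        start := time.Now()
        resp, err := t.next.RoundTrip(req)
        ms := time.Since(start).Milliseconds()
        if err != nil {
            // Matches the empty status="" lines in the log: an error, no response.
            fmt.Printf("Response status=\"\" milliseconds=%d err=%v\n", ms, err)
            return nil, err
        }
        fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
        return resp, nil
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
        client.Get("https://127.0.0.1:50086/api/v1/nodes/functional-871500") // demo: returns ignored
    }
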
	I1210 05:57:55.845563    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:55.917684    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:55.920989    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:55.920989    3528 retry.go:31] will retry after 14.528574699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:56.237162    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:56.237162    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.240358    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:57:56.240358    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:56.240358    3528 type.go:168] "Request Body" body=""
	I1210 05:57:56.240358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.242693    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:57.243108    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:57.243108    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:57.246459    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:58.247768    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:58.248150    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:58.251587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:59.252608    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:59.252608    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:59.255751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:00.256340    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:00.256340    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:00.259424    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:01.260417    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:01.260417    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:01.263835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:02.264658    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:02.264976    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:02.268894    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:03.269646    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:03.270040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:03.272742    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:04.273295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:04.273295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:04.276636    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:05.277239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:05.277639    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:05.280629    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:06.281483    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:06.281483    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.285745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1210 05:58:06.285802    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:06.285840    3528 type.go:168] "Request Body" body=""
	I1210 05:58:06.285987    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.288564    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:07.289127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:07.289127    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:07.292563    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:08.293072    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:08.293072    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:08.297241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:08.732392    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:08.811298    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:08.814895    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:08.814895    3528 retry.go:31] will retry after 24.059893548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:09.297667    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:09.297667    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:09.300824    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:10.301402    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:10.301402    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:10.304411    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:10.455124    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:10.546239    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:10.546239    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:10.546239    3528 retry.go:31] will retry after 31.876597574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:11.304978    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:11.304978    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:11.308149    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:12.308734    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:12.308734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:12.311812    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:13.312561    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:13.313241    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:13.316204    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:14.317485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:14.317883    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:14.320038    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:15.320460    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:15.320460    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:15.323420    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:16.323723    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:16.323723    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.326977    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:16.326977    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:16.327139    3528 type.go:168] "Request Body" body=""
	I1210 05:58:16.327227    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.329681    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:17.330932    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:17.330932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:17.333882    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:18.334334    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:18.334798    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:18.338144    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:19.338534    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:19.338534    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:19.342989    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:20.343612    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:20.343612    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:20.346805    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:21.347681    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:21.347681    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:21.350863    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:22.351290    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:22.351290    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:22.354536    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:23.355239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:23.355239    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:23.358499    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:24.359467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:24.359467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:24.364653    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:58:25.365025    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:25.365025    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:25.368433    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:26.369056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:26.369056    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.372426    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:26.372457    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:26.372457    3528 type.go:168] "Request Body" body=""
	I1210 05:58:26.372457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.374640    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:27.375624    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:27.375624    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:27.379448    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:28.380744    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:28.380744    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:28.384412    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:29.385100    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:29.385455    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:29.388161    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:30.388490    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:30.388490    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:30.391842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:31.392294    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:31.392294    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:31.395842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:32.397016    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:32.397016    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:32.399019    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:32.881902    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:32.967281    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:32.972519    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:32.972519    3528 retry.go:31] will retry after 41.610684516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:33.399525    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:33.399525    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:33.402804    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:34.403496    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:34.403496    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:34.406699    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:35.406992    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:35.406992    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:35.410007    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:36.410696    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:36.410696    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.414578    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:36.414673    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:36.414815    3528 type.go:168] "Request Body" body=""
	I1210 05:58:36.414864    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.417495    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:37.417917    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:37.418702    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:37.421367    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:38.421905    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:38.421905    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:38.424630    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:39.425767    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:39.426355    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:39.429576    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:40.429801    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:40.429801    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:40.433301    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:41.433959    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:41.433959    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:41.437621    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:42.429097    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:42.438217    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:42.438429    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:42.440917    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:42.509794    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.514955    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.515232    3528 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
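
After several failed applies, minikube surfaces the addon failure as the out.go warning above and the run continues. The recurring root cause throughout this log is that nothing answers on localhost:8441, so every attempt fails the same way. A quick hypothetical preflight check (not part of minikube) that would distinguish "apiserver port unreachable" from a genuine manifest problem before any kubectl apply:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the log: dial tcp [::1]:8441: connect: connection refused.
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
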
	I1210 05:58:43.441740    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:43.441740    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:43.444947    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:44.445672    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:44.445672    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:44.449361    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:45.449616    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:45.450071    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:45.452940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:46.454145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:46.454503    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:46.458078    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:46.458078    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
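This request/Retry-After/EOF cycle repeats about once per second for the remainder of the log: minikube polls GET /api/v1/nodes/functional-871500 through the tunnel on 127.0.0.1:50086, the connection returns EOF, and after every tenth attempt node_ready logs the warning above and starts a fresh round. The same readiness poll can be sketched from the shell (hypothetical script, assuming a working kubeconfig; the node name is taken from the log):

    # Poll the node's Ready condition once per second until it is True.
    export KUBECONFIG=/var/lib/minikube/kubeconfig
    until [ "$(kubectl get node functional-871500 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' \
        2>/dev/null)" = "True" ]; do
      echo "node functional-871500 not Ready yet (will retry)"
      sleep 1
    done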
	I1210 05:58:46.458078    3528 type.go:168] "Request Body" body=""
	I1210 05:58:46.458078    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:46.460173    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:47.460277    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:47.460277    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:47.462994    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:48.463438    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:48.463438    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:48.466303    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:49.467359    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:49.467359    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:49.471033    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:50.471353    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:50.471932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:50.474800    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:51.475228    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:51.475228    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:51.478898    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:52.479596    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:52.479596    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:52.483072    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:53.483188    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:53.483188    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:53.486888    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:54.487194    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:54.487194    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:54.489701    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:55.490295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:55.490295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:55.494381    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:56.495339    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:56.495339    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:56.498522    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:56.498627    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:56.498685    3528 type.go:168] "Request Body" body=""
	I1210 05:58:56.498685    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:56.501262    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:57.501619    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:57.501619    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:57.504510    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:58.504988    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:58.504988    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:58.508232    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:59.508716    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:59.508952    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:59.512196    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:00.512876    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:00.512876    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:00.516262    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:01.516926    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:01.516926    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:01.519996    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:02.520867    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:02.520867    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:02.524453    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:03.525204    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:03.525204    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:03.528417    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:04.528800    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:04.528800    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:04.532496    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:05.533449    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:05.533449    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:05.535518    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:06.535694    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:06.535694    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:06.538826    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:06.538826    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
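Each request advertises Accept: application/vnd.kubernetes.protobuf,application/json, i.e. the client prefers the Protobuf wire encoding and falls back to JSON. The same probe can be issued manually to see how the endpoint behaves (illustrative only: without the client certificates from the kubeconfig a healthy apiserver would answer 401/403, and here the connection would die with EOF just as in the log):

    # Manual GET of the same URL with the same content negotiation.
    curl -vk --max-time 5 \
      -H 'Accept: application/vnd.kubernetes.protobuf,application/json' \
      https://127.0.0.1:50086/api/v1/nodes/functional-871500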
	I1210 05:59:06.538826    3528 type.go:168] "Request Body" body=""
	I1210 05:59:06.538826    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:06.541732    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:07.542096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:07.542096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:07.546090    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:08.546686    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:08.546686    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:08.550042    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:09.551069    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:09.551069    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:09.554772    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:10.555918    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:10.556223    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:10.558373    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:11.559619    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:11.559619    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:11.562909    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:12.563393    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:12.563393    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:12.566949    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:13.567538    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:13.567538    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:13.570615    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:14.571357    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:14.571869    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:14.574910    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:14.588699    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:59:14.659982    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:59:14.667272    3528 out.go:179] * Enabled addons: 
	I1210 05:59:14.669291    3528 addons.go:530] duration metric: took 1m48.9444759s for enable addons: enabled=[]
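Both addon applies failed the same way, so addon enabling finishes after 1m48s with an empty list (enabled=[]). The common root cause, nothing answering on the apiserver port, can be confirmed directly from inside the node (hypothetical check: 8441 is the port from the errors above, and /readyz is the standard kube-apiserver readiness endpoint):

    # A refused connection here matches the "dial tcp [::1]:8441:
    # connect: connection refused" seen in both addon failures.
    curl -k --max-time 5 https://localhost:8441/readyz \
      || echo "apiserver not reachable on :8441"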
	I1210 05:59:15.575548    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:15.575548    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:15.577957    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:16.578269    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:16.578269    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:16.581535    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:16.581626    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:16.581709    3528 type.go:168] "Request Body" body=""
	I1210 05:59:16.581757    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:16.584351    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:17.585087    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:17.585598    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:17.587811    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:18.588817    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:18.588817    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:18.593150    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:19.593863    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:19.593863    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:19.596290    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:20.596979    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:20.597284    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:20.600249    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:21.600500    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:21.600500    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:21.603751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:22.603880    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:22.603880    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:22.608748    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:23.609127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:23.609447    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:23.612322    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:24.613043    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:24.613043    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:24.616893    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:25.617546    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:25.617895    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:25.620726    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:26.620874    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:26.621261    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:26.624539    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:26.624539    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:26.624539    3528 type.go:168] "Request Body" body=""
	I1210 05:59:26.624539    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:26.627913    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:27.628729    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:27.628729    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:27.631708    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:28.632003    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:28.632003    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:28.635112    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:29.636254    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:29.636254    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:29.640073    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:30.640567    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:30.640567    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:30.643449    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:31.644603    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:31.644603    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:31.648321    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:32.648642    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:32.649007    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:32.651241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:33.652555    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:33.652555    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:33.655647    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:34.656445    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:34.656445    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:34.659525    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:35.660470    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:35.660769    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:35.663511    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:36.663841    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:36.663841    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:36.667272    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:36.667368    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:36.667437    3528 type.go:168] "Request Body" body=""
	I1210 05:59:36.667551    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:36.671515    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:37.671899    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:37.671899    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:37.676101    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:38.676473    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:38.676473    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:38.679323    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:39.679890    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:39.679890    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:39.682898    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:40.683418    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:40.683418    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:40.687065    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:41.687380    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:41.687380    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:41.690398    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:42.691292    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:42.691292    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:42.693967    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:43.694336    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:43.694336    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:43.697547    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:44.697757    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:44.697757    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:44.700896    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:45.701213    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:45.701677    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:45.704167    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:46.704767    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:46.705237    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:46.708460    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:46.709023    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:46.709124    3528 type.go:168] "Request Body" body=""
	I1210 05:59:46.709198    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:46.711537    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:47.711814    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:47.712045    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:47.715217    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:48.716360    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:48.716360    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:48.719060    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:49.719847    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:49.719847    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:49.723779    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:50.724269    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:50.724269    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:50.728439    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:51.729126    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:51.729126    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:51.732791    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:52.734110    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:52.734110    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:52.738074    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:53.738271    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:53.738271    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:53.741809    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:54.742174    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:54.742174    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:54.746052    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:55.747079    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:55.747079    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:55.750285    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:56.750719    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:56.750719    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:56.753273    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:59:56.753273    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:56.753273    3528 type.go:168] "Request Body" body=""
	I1210 05:59:56.753273    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:56.755741    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:57.757283    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:57.757592    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:57.759856    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:58.761013    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:58.761013    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:58.764032    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:59.764386    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:59.764386    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:59.767579    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:00.767741    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:00.767741    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:00.771607    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:01.771831    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:01.771831    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:01.775356    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:02.775642    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:02.775642    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:02.779145    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:03.779411    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:03.779411    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:03.783151    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:04.783296    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:04.783296    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:04.786762    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:05.787153    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:05.787153    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:05.790518    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:06.790834    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:06.790834    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:06.794128    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:06.794128    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:06.794128    3528 type.go:168] "Request Body" body=""
	I1210 06:00:06.794660    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:06.796765    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:07.797318    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:07.797318    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:07.800177    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:08.801465    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:08.801465    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:08.804595    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:09.805061    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:09.805401    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:09.807835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:10.808649    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:10.808991    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:10.811366    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:11.811812    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:11.811812    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:11.815185    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:12.815710    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:12.815710    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:12.819741    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:13.820205    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:13.820482    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:13.823243    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:14.823451    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:14.823451    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:14.826552    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:15.827102    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:15.827102    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:15.830239    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:16.830899    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:16.830899    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:16.833829    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:00:16.833829    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:16.834466    3528 type.go:168] "Request Body" body=""
	I1210 06:00:16.834489    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:16.836240    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:17.836565    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:17.836565    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:17.840343    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:18.840710    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:18.841040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:18.844672    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:19.845082    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:19.845358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:19.846852    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:20.848265    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:20.848265    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:20.851784    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:21.852223    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:21.852223    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:21.855023    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:22.856027    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:22.856027    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:22.859873    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:23.860923    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:23.860923    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:23.864261    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:24.864916    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:24.864916    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:24.868305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:25.869078    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:25.869078    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:25.871509    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:26.871824    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:26.871824    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:26.875349    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:26.875349    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... the same one-second Retry-After retry cycle repeats, identical apart from timestamps and per-request latencies: a fresh GET to https://127.0.0.1:50086/api/v1/nodes/functional-871500, ten retried attempts logged by with_retry.go:234 and round_trippers.go, then
	W1210 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	repeated at 06:00:36, 06:00:46, 06:00:56, 06:01:07, 06:01:17, 06:01:27, 06:01:37, and 06:01:47; the captured output ends mid-cycle at 06:01:51 ...]
	I1210 06:01:52.224601    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:52.224601    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:52.227794    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:53.228750    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:53.228750    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:53.231412    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:54.232114    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:54.232114    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:54.235027    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:55.235579    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:55.235983    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:55.238624    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:56.239321    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:56.239321    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:56.241809    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:57.242257    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:57.242257    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:57.245969    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:01:57.245969    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:57.245969    3528 type.go:168] "Request Body" body=""
	I1210 06:01:57.245969    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:57.248410    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:01:58.249059    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:58.249059    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:58.252337    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:59.252782    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:59.253339    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:59.255908    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:00.256663    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:00.257161    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:00.259603    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:01.260700    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:01.260700    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:01.263908    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:02.263994    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:02.264404    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:02.267730    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:03.268305    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:03.268305    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:03.271419    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:04.271604    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:04.271604    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:04.274704    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:05.275664    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:05.275664    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:05.278947    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:06.280127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:06.280127    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:06.283728    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:07.284100    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:07.284100    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:07.286782    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:02:07.286782    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:07.287315    3528 type.go:168] "Request Body" body=""
	I1210 06:02:07.287315    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:07.289712    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:08.290003    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:08.290003    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:08.293335    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:09.293835    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:09.293835    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:09.296504    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:10.296683    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:10.296683    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:10.299600    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:11.300202    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:11.300202    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:11.303557    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:12.305092    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:12.305092    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:12.307542    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:13.308588    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:13.308588    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:13.312484    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:14.312766    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:14.312766    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:14.316277    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:15.317454    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:15.317454    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:15.320383    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:16.320913    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:16.320913    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:16.323576    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:17.323813    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:17.323813    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:17.326985    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:02:17.326985    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:17.326985    3528 type.go:168] "Request Body" body=""
	I1210 06:02:17.326985    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:17.329581    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:18.330187    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:18.330187    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:18.332737    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:19.333031    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:19.333031    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:19.335030    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:02:20.336555    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:20.336555    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:20.339555    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:21.340558    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:21.340558    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:21.342929    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:22.343239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:22.343724    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:22.346810    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:23.347387    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:23.347387    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:23.350241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:24.350796    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:24.350796    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:24.353724    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:25.354434    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:25.354772    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:25.357575    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:26.358016    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:26.358016    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:26.361246    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:27.362131    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:27.362479    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:27.365230    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:02:27.365813    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:27.365813    3528 type.go:168] "Request Body" body=""
	I1210 06:02:27.365813    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:27.368828    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:28.369580    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:28.369580    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:28.372320    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:29.372897    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:29.372897    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:29.376660    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:30.377760    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:30.377760    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:30.380415    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:31.381897    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:31.381897    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:31.385100    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:32.385291    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:32.385291    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:32.387374    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:33.389360    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:33.389360    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:33.393116    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:34.393502    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:34.393502    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:34.396152    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:35.396913    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:35.396913    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:35.401573    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:02:36.402190    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:36.402534    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:36.404711    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:37.405859    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:37.405859    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:37.408704    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:02:37.408838    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:37.408985    3528 type.go:168] "Request Body" body=""
	I1210 06:02:37.409077    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:37.412442    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:38.413079    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:38.413079    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:38.416332    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:39.416603    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:39.416603    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:39.420060    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:40.420482    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:40.420482    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:40.424152    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:41.424439    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:41.424439    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:41.427960    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:42.428547    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:42.428547    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:42.433716    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 06:02:43.434760    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:43.434760    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:43.437305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:44.437929    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:44.437929    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:44.441911    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:45.442598    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:45.442598    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:45.445386    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:46.445563    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:46.445958    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:46.449188    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:47.450213    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:47.450868    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:47.453841    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:02:47.453841    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:47.453841    3528 type.go:168] "Request Body" body=""
	I1210 06:02:47.453841    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:47.457634    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:48.457929    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:48.457929    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:48.461148    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:49.461572    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:49.461572    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:49.464368    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:50.465569    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:50.465956    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:50.468785    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:51.469079    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:51.469079    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:51.473246    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:02:52.473693    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:52.473693    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:52.477423    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:53.477937    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:53.477937    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:53.481938    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:02:54.482839    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:54.482839    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:54.485813    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:55.486892    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:55.486892    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:55.490131    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:56.490554    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:56.490554    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:56.493887    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:57.494861    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:57.494861    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:57.497800    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:02:57.497800    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:02:57.497998    3528 type.go:168] "Request Body" body=""
	I1210 06:02:57.498076    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:57.500781    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:02:58.501021    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:58.501021    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:58.504136    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:02:59.504488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:02:59.504969    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:02:59.507730    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:00.508009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:00.508009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:00.511476    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:01.512344    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:01.512344    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:01.515549    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:02.516467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:02.516467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:02.520405    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:03.520921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:03.521256    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:03.524252    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:04.524513    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:04.524953    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:04.527628    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:05.529050    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:05.529050    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:05.536803    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=7
	I1210 06:03:06.537822    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:06.537822    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:06.541195    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:07.541552    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:07.541552    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:07.544874    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:03:07.544874    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:03:07.544874    3528 type.go:168] "Request Body" body=""
	I1210 06:03:07.544874    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:07.548078    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:08.548780    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:08.548969    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:08.551745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:09.552670    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:09.552670    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:09.556239    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:10.556550    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:10.556906    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:10.559896    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:11.560632    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:11.560632    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:11.563477    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:12.564335    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:12.564335    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:12.567101    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:13.567254    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:13.567254    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:13.570684    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:14.571214    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:14.571214    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:14.573567    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:15.574056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:15.574401    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:15.577034    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:16.577296    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:16.577296    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:16.580507    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:17.580670    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:17.580670    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:17.584345    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:03:17.584442    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:03:17.584620    3528 type.go:168] "Request Body" body=""
	I1210 06:03:17.584714    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:17.586766    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:18.587485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:18.587485    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:18.590661    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:19.591695    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:19.592099    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:19.594643    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:20.595361    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:20.595361    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:20.597940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:21.598595    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:21.598595    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:21.601244    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:22.601730    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:22.601730    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:22.604442    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:23.605664    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:23.605664    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:23.608404    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:03:24.609206    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:24.609206    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:24.612484    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:03:25.613066    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:03:25.613066    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:03:25.615998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:03:26.117891    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 06:03:26.117891    3528 node_ready.go:38] duration metric: took 6m0.0004685s for node "functional-871500" to be "Ready" ...
	I1210 06:03:26.123026    3528 out.go:203] 
	W1210 06:03:26.125419    3528 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:03:26.125419    3528 out.go:285] * 
	W1210 06:03:26.127475    3528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:03:26.130878    3528 out.go:203] 
	
	
	==> Docker <==
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483189206Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483194507Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483214008Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483249911Z" level=info msg="Initializing buildkit"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.582637464Z" level=info msg="Completed buildkit initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589253381Z" level=info msg="Daemon has completed initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589392791Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589467497Z" level=info msg="API listen on [::]:2376"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589490799Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 05:57:22 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:22 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 05:57:23 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Loaded network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 05:57:23 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:05:36.272914   20418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:36.273846   20418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:36.274942   20418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:36.275939   20418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:36.277219   20418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001083] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001015] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000877] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 05:57] CPU: 2 PID: 55724 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000754] RIP: 0033:0x7fd067afcb20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fd067afcaf6.
	[  +0.000673] RSP: 002b:00007ffe57c686d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000893] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000747] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000734] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000747] FS:  0000000000000000 GS:  0000000000000000
	[  +0.824990] CPU: 8 PID: 55850 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000805] RIP: 0033:0x7f91646e5b20
	[  +0.000401] Code: Unable to access opcode bytes at RIP 0x7f91646e5af6.
	[  +0.000653] RSP: 002b:00007ffe3817fb80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000798] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:05:36 up  1:33,  0 user,  load average: 0.41, 0.36, 0.60
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:05:32 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:33 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 985.
	Dec 10 06:05:33 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:33 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:33 functional-871500 kubelet[20250]: E1210 06:05:33.584852   20250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:33 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:33 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:34 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 986.
	Dec 10 06:05:34 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:34 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:34 functional-871500 kubelet[20263]: E1210 06:05:34.337737   20263 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:34 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:34 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:35 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 987.
	Dec 10 06:05:35 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:35 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:35 functional-871500 kubelet[20292]: E1210 06:05:35.093604   20292 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:35 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:35 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:35 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 988.
	Dec 10 06:05:35 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:35 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:35 functional-871500 kubelet[20382]: E1210 06:05:35.833366   20382 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:35 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:35 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
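The kubelet journal above is the root cause for this group of failures: every restart (the counter reaches 988) dies in config validation because the kicbase container is running under a cgroup v1 hierarchy, which kubelet v1.35.0-rc.1 refuses outright. The six-minute node-Ready wait earlier in the log is the same failure seen from the client side, as apiserver EOFs and 1s Retry-After loops. Whether a host exposes the unified (v2) hierarchy can be checked with a statfs magic-number test; the following is a minimal Go sketch of that check under standard Linux paths and constants, not minikube's or kubelet's actual validation code.

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		// On a unified-hierarchy host, /sys/fs/cgroup is mounted as cgroup2fs.
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2: kubelet v1.35+ can start")
		} else {
			// Matches this run: the WSL2 kernel is still on cgroup v1, hence
			// "kubelet is configured to not run on a host using cgroup v1".
			fmt.Println("cgroup v1: kubelet v1.35.0-rc.1 refuses to run")
		}
	}

The dockerd startup log above prints the matching cgroup v1 deprecation warning, so the container runtime and kubelet agree on what this host offers.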
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (613.632ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (54.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (3.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:750: failed to link kubectl binary from out/minikube-windows-amd64.exe to out\kubectl.exe: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
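This failure is a stale artifact rather than a product bug: the hard link cannot be created because out\kubectl.exe is still present from an earlier run, and Windows reports "Cannot create a file when that file already exists." A remove-then-link sketch of an idempotent version of that step follows; linkOrReplace is a hypothetical helper written for illustration, not the code in functional_test.go.

	package main

	import "os"

	// linkOrReplace hard-links src to dst, first deleting any dst left over
	// from a previous run so os.Link cannot fail with "already exists".
	func linkOrReplace(src, dst string) error {
		if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
			return err
		}
		return os.Link(src, dst)
	}

	func main() {
		// Paths taken from the failure message above.
		if err := linkOrReplace("out/minikube-windows-amd64.exe", `out\kubectl.exe`); err != nil {
			panic(err)
		}
	}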
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
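The Ports block in this inspect output ties the earlier apiserver polling together: 8441/tcp inside the container is published on the host as 127.0.0.1:50086, exactly the URL the Retry-After loop was hitting. The test driver reads such mappings with docker container inspect and a Go template (the cli_runner lines later in this log do the same for 22/tcp). A small standalone sketch of that lookup, with the container name taken from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template style the driver uses, applied to the apiserver
		// port 8441/tcp of the functional-871500 container.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-871500").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("apiserver published at https://127.0.0.1:%s\n",
			strings.TrimSpace(string(out)))
	}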
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (602.5846ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.1650529s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-493600 image ls --format yaml --alsologtostderr                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ ssh     │ functional-493600 ssh pgrep buildkitd                                                                                 │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls                                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls --format json --alsologtostderr                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service │ functional-493600 service hello-node --url                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image ls --format table --alsologtostderr                                                           │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p functional-493600                                                                                                  │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start   │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	│ start   │ -p functional-871500 --alsologtostderr -v=8                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:57 UTC │                     │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.1                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.3                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:latest                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add minikube-local-cache-test:functional-871500                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache delete minikube-local-cache-test:functional-871500                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ list                                                                                                                  │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl images                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ cache   │ functional-871500 cache reload                                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                   │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl │ functional-871500 kubectl -- --context functional-871500 get pods                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:57:16
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:57:16.875847    3528 out.go:360] Setting OutFile to fd 1624 ...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.917657    3528 out.go:374] Setting ErrFile to fd 1612...
	I1210 05:57:16.917657    3528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:57:16.932616    3528 out.go:368] Setting JSON to false
	I1210 05:57:16.934770    3528 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5168,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:57:16.934770    3528 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:57:16.939605    3528 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:57:16.942014    3528 notify.go:221] Checking for updates...
	I1210 05:57:16.946622    3528 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:16.950394    3528 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:57:16.952350    3528 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:57:16.955212    3528 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:57:16.957439    3528 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:57:16.962034    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:16.962229    3528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:57:17.077929    3528 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:57:17.082453    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.310960    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.287646185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.314972    3528 out.go:179] * Using the docker driver based on existing profile
	I1210 05:57:17.316973    3528 start.go:309] selected driver: docker
	I1210 05:57:17.316973    3528 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.316973    3528 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:57:17.322956    3528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:57:17.562979    3528 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 05:57:17.536373793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:57:17.650233    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:17.650233    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:17.650860    3528 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:57:17.654219    3528 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 05:57:17.656244    3528 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:57:17.659128    3528 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:57:17.661459    3528 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:57:17.661459    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:17.661583    3528 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:57:17.661583    3528 cache.go:65] Caching tarball of preloaded images
	I1210 05:57:17.661583    3528 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 05:57:17.662115    3528 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 05:57:17.662465    3528 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 05:57:17.734611    3528 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 05:57:17.734611    3528 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 05:57:17.734611    3528 cache.go:243] Successfully downloaded all kic artifacts
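Likewise, the kicbase check above asks the local daemon whether the pinned base image is already present and pulls only on a miss. A sketch of the same check shelling out to docker image inspect (a real subcommand that exits non-zero when the image is absent; the helper name is made up):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already holds ref;
    // `docker image inspect` exits non-zero when the image is absent.
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
    	if imageInDaemon(ref) {
    		fmt.Println("exists in daemon, skipping pull")
    	} else {
    		fmt.Println("would pull", ref)
    	}
    }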
	I1210 05:57:17.734611    3528 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:57:17.735277    3528 start.go:364] duration metric: took 104.4µs to acquireMachinesLock for "functional-871500"
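The machines lock acquired above serializes concurrent minikube invocations against the same profile. A simplified stand-in that polls for an exclusive lock file using the Delay/Timeout values printed in the log; minikube's real lock is more elaborate, so treat this only as the shape of the idea:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock polls for an exclusive lock file until the timeout elapses,
    // mirroring the Delay/Timeout parameters in the log line above.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("functional-871500.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held")
    }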
	I1210 05:57:17.735336    3528 start.go:96] Skipping create...Using existing machine configuration
	I1210 05:57:17.735336    3528 fix.go:54] fixHost starting: 
	I1210 05:57:17.741445    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:17.794847    3528 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 05:57:17.794847    3528 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 05:57:17.798233    3528 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 05:57:17.798233    3528 machine.go:94] provisionDockerMachine start ...
	I1210 05:57:17.802052    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:17.859397    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:17.860025    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:17.860025    3528 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 05:57:18.039007    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.039007    3528 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 05:57:18.043768    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.100666    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.100666    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.100666    3528 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 05:57:18.283797    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 05:57:18.287904    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.342863    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:18.343348    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:18.343409    3528 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 05:57:18.533020    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
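All of the provisioning commands above (hostname, /etc/hostname, /etc/hosts) run over SSH to 127.0.0.1:50082, the container's published 22/tcp port. A compact transport sketch using golang.org/x/crypto/ssh with the machine key path from the log; host-key checking is disabled here purely because the target is a locally created container this process just provisioned:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH opens one session per command (an ssh.Session can run only a
    // single command), returning the command's combined output.
    func runSSH(addr, user string, signer ssh.Signer, cmd string) (string, error) {
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	out, err := runSSH("127.0.0.1:50082", "docker", signer, "hostname")
    	fmt.Println(out, err)
    }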
	I1210 05:57:18.533020    3528 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 05:57:18.533020    3528 ubuntu.go:190] setting up certificates
	I1210 05:57:18.533020    3528 provision.go:84] configureAuth start
	I1210 05:57:18.537250    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:18.595140    3528 provision.go:143] copyHostCerts
	I1210 05:57:18.595839    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1210 05:57:18.596031    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 05:57:18.596062    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 05:57:18.596239    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 05:57:18.596845    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1210 05:57:18.597366    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 05:57:18.597406    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 05:57:18.597495    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 05:57:18.598291    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 05:57:18.598291    3528 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 05:57:18.598291    3528 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 05:57:18.599093    3528 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
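The server certificate generated above must cover every name and address a client might dial, hence the SAN list [127.0.0.1 192.168.49.2 functional-871500 localhost minikube]. A sketch of issuing such a cert with crypto/x509, including a throwaway CA so it runs standalone; field values mirror the log, but this is not minikube's signing code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate covering the SANs from the
    // log (both IPs and DNS names), signed by the given CA.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-871500"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"functional-871500", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	der, _, err := newServerCert(ca, caKey)
    	fmt.Println(len(der), err)
    }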
	I1210 05:57:18.702479    3528 provision.go:177] copyRemoteCerts
	I1210 05:57:18.706176    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 05:57:18.709177    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:18.761464    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:18.886181    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1210 05:57:18.886181    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 05:57:18.914027    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1210 05:57:18.914027    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 05:57:18.939266    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1210 05:57:18.939794    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 05:57:18.968597    3528 provision.go:87] duration metric: took 435.5446ms to configureAuth
	I1210 05:57:18.968633    3528 ubuntu.go:206] setting minikube options for container-runtime
	I1210 05:57:18.969064    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:18.972714    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.026843    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.027475    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.027475    3528 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 05:57:19.213570    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 05:57:19.213570    3528 ubuntu.go:71] root file system type: overlay
	I1210 05:57:19.213570    3528 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 05:57:19.217470    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.271762    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.271762    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.271762    3528 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 05:57:19.465304    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 05:57:19.469988    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.524496    3528 main.go:143] libmachine: Using SSH client type: native
	I1210 05:57:19.525153    3528 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 05:57:19.525153    3528 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 05:57:19.708281    3528 main.go:143] libmachine: SSH cmd err, output: <nil>: 
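The diff-or-replace one-liner above is what keeps provisioning idempotent: docker.service is only swapped in, and the daemon only reloaded and restarted, when the freshly rendered unit actually differs from the file on disk. The same idea expressed in Go, as a hedged sketch:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // writeIfChanged mirrors the diff-or-replace idiom from the log: callers
    // only trigger daemon-reload/restart when the content really changed.
    func writeIfChanged(path string, content []byte) (changed bool, err error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return false, nil // identical: skip mv + daemon-reload + restart
    	}
    	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(path+".new", path)
    }

    func main() {
    	changed, err := writeIfChanged("/lib/systemd/system/docker.service", []byte("[Unit]\n..."))
    	fmt.Println(changed, err)
    }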
	I1210 05:57:19.708281    3528 machine.go:97] duration metric: took 1.9100246s to provisionDockerMachine
	I1210 05:57:19.708281    3528 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 05:57:19.708281    3528 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 05:57:19.712864    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 05:57:19.716356    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:19.769263    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:19.910607    3528 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 05:57:19.918702    3528 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_ID="12"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 05:57:19.918702    3528 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 05:57:19.918702    3528 command_runner.go:130] > ID=debian
	I1210 05:57:19.918702    3528 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 05:57:19.918702    3528 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 05:57:19.918702    3528 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 05:57:19.918927    3528 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 05:57:19.919018    3528 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 05:57:19.919060    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 05:57:19.919569    3528 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 05:57:19.919739    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 05:57:19.919739    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /etc/ssl/certs/113042.pem
	I1210 05:57:19.921060    3528 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 05:57:19.921102    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> /etc/test/nested/copy/11304/hosts
	I1210 05:57:19.926330    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 05:57:19.937995    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 05:57:19.967462    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 05:57:19.996671    3528 start.go:296] duration metric: took 288.3864ms for postStartSetup
	I1210 05:57:20.001220    3528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 05:57:20.004094    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.057975    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.183984    3528 command_runner.go:130] > 1%
	I1210 05:57:20.188612    3528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 05:57:20.199532    3528 command_runner.go:130] > 950G
	I1210 05:57:20.200170    3528 fix.go:56] duration metric: took 2.4648044s for fixHost
	I1210 05:57:20.200170    3528 start.go:83] releasing machines lock for "functional-871500", held for 2.4648316s
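The two disk checks just above scrape df output with awk: NR==2 selects the data row, $5 is the Use% column, and with -BG, $4 is the space available in gigabyte blocks. An equivalent field extractor in Go, under the assumption that df prints exactly one header line:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dfField runs df on path and returns the n-th whitespace-separated
    // field of the second output line, like the awk 'NR==2{print $n}' pipes.
    func dfField(path string, args []string, n int) (string, error) {
    	out, err := exec.Command("df", append(args, path)...).Output()
    	if err != nil {
    		return "", err
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	if len(lines) < 2 {
    		return "", fmt.Errorf("unexpected df output: %q", out)
    	}
    	fields := strings.Fields(lines[1])
    	if n-1 >= len(fields) {
    		return "", fmt.Errorf("df line has only %d fields", len(fields))
    	}
    	return fields[n-1], nil
    }

    func main() {
    	used, _ := dfField("/var", []string{"-h"}, 5)  // Use%, e.g. "1%"
    	free, _ := dfField("/var", []string{"-BG"}, 4) // available, e.g. "950G"
    	fmt.Println(used, free)
    }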
	I1210 05:57:20.204329    3528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 05:57:20.260852    3528 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 05:57:20.265678    3528 ssh_runner.go:195] Run: cat /version.json
	I1210 05:57:20.265678    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.268055    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:20.318377    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.318938    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:20.440815    3528 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1210 05:57:20.440815    3528 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 05:57:20.448568    3528 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1210 05:57:20.452774    3528 ssh_runner.go:195] Run: systemctl --version
	I1210 05:57:20.464224    3528 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 05:57:20.464224    3528 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 05:57:20.469738    3528 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 05:57:20.478403    3528 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 05:57:20.478403    3528 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 05:57:20.483606    3528 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 05:57:20.495780    3528 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 05:57:20.495780    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:20.495780    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:20.495780    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:20.518759    3528 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1210 05:57:20.523282    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 05:57:20.541393    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 05:57:20.546364    3528 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 05:57:20.546364    3528 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 05:57:20.557861    3528 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 05:57:20.562880    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 05:57:20.580735    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.598803    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 05:57:20.615367    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 05:57:20.637025    3528 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 05:57:20.656757    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 05:57:20.676589    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 05:57:20.695912    3528 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 05:57:20.717653    3528 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 05:57:20.732788    3528 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 05:57:20.737410    3528 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 05:57:20.756411    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:20.908020    3528 ssh_runner.go:195] Run: sudo systemctl restart containerd
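The cgroup driver detected above ("cgroupfs") is what drives the sed edits to /etc/containerd/config.toml. One common way to probe the host's cgroup setup is to statfs /sys/fs/cgroup and compare against CGROUP2_SUPER_MAGIC; the sketch below shows that heuristic, which is not necessarily the exact check minikube's detect.go performs:

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC from linux/magic.h

    // cgroupVersion reports whether /sys/fs/cgroup is the unified (v2)
    // hierarchy. The driver choice (cgroupfs vs systemd) is a policy
    // decision layered on top of this probe.
    func cgroupVersion() (string, error) {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		return "", err
    	}
    	if st.Type == cgroup2SuperMagic {
    		return "v2", nil
    	}
    	return "v1", nil
    }

    func main() {
    	v, err := cgroupVersion()
    	fmt.Println(v, err)
    }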
	I1210 05:57:21.078402    3528 start.go:496] detecting cgroup driver to use...
	I1210 05:57:21.078402    3528 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 05:57:21.083945    3528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Unit]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Description=Docker Application Container Engine
	I1210 05:57:21.102632    3528 command_runner.go:130] > Documentation=https://docs.docker.com
	I1210 05:57:21.102632    3528 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1210 05:57:21.102632    3528 command_runner.go:130] > Wants=network-online.target containerd.service
	I1210 05:57:21.102632    3528 command_runner.go:130] > Requires=docker.socket
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitBurst=3
	I1210 05:57:21.102632    3528 command_runner.go:130] > StartLimitIntervalSec=60
	I1210 05:57:21.102632    3528 command_runner.go:130] > [Service]
	I1210 05:57:21.102632    3528 command_runner.go:130] > Type=notify
	I1210 05:57:21.102632    3528 command_runner.go:130] > Restart=always
	I1210 05:57:21.102632    3528 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1210 05:57:21.102632    3528 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1210 05:57:21.102632    3528 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1210 05:57:21.102632    3528 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1210 05:57:21.102632    3528 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1210 05:57:21.102632    3528 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1210 05:57:21.102632    3528 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1210 05:57:21.102632    3528 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1210 05:57:21.102632    3528 command_runner.go:130] > ExecStart=
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1210 05:57:21.103158    3528 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1210 05:57:21.103158    3528 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1210 05:57:21.103158    3528 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNOFILE=infinity
	I1210 05:57:21.103158    3528 command_runner.go:130] > LimitNPROC=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > LimitCORE=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1210 05:57:21.103378    3528 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1210 05:57:21.103378    3528 command_runner.go:130] > TasksMax=infinity
	I1210 05:57:21.103378    3528 command_runner.go:130] > TimeoutStartSec=0
	I1210 05:57:21.103378    3528 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1210 05:57:21.103378    3528 command_runner.go:130] > Delegate=yes
	I1210 05:57:21.103378    3528 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1210 05:57:21.103378    3528 command_runner.go:130] > KillMode=process
	I1210 05:57:21.103378    3528 command_runner.go:130] > OOMScoreAdjust=-500
	I1210 05:57:21.103378    3528 command_runner.go:130] > [Install]
	I1210 05:57:21.103378    3528 command_runner.go:130] > WantedBy=multi-user.target
	I1210 05:57:21.111084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.134007    3528 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 05:57:21.193270    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 05:57:21.218062    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 05:57:21.240026    3528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 05:57:21.262345    3528 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1210 05:57:21.267460    3528 ssh_runner.go:195] Run: which cri-dockerd
	I1210 05:57:21.274915    3528 command_runner.go:130] > /usr/bin/cri-dockerd
	I1210 05:57:21.278860    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 05:57:21.290698    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 05:57:21.314565    3528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 05:57:21.466409    3528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 05:57:21.603844    3528 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 05:57:21.603844    3528 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 05:57:21.630009    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 05:57:21.650723    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:21.786633    3528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 05:57:22.595739    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 05:57:22.618130    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 05:57:22.639399    3528 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 05:57:22.666084    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:22.689760    3528 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 05:57:22.826287    3528 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 05:57:22.966482    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.147658    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 05:57:23.173945    3528 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 05:57:23.199471    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:23.338742    3528 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 05:57:23.455945    3528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 05:57:23.474438    3528 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 05:57:23.478444    3528 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1210 05:57:23.486000    3528 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 05:57:23.486000    3528 command_runner.go:130] > Device: 0,112	Inode: 1768        Links: 1
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1210 05:57:23.486000    3528 command_runner.go:130] > Access: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Modify: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] > Change: 2025-12-10 05:57:23.343769342 +0000
	I1210 05:57:23.486000    3528 command_runner.go:130] >  Birth: -
	I1210 05:57:23.486000    3528 start.go:564] Will wait 60s for crictl version
	I1210 05:57:23.490664    3528 ssh_runner.go:195] Run: which crictl
	I1210 05:57:23.496443    3528 command_runner.go:130] > /usr/local/bin/crictl
	I1210 05:57:23.501067    3528 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 05:57:23.549049    3528 command_runner.go:130] > Version:  0.1.0
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeName:  docker
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeVersion:  29.1.2
	I1210 05:57:23.549049    3528 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 05:57:23.549049    3528 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 05:57:23.552780    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.592051    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.595007    3528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 05:57:23.630739    3528 command_runner.go:130] > 29.1.2
	I1210 05:57:23.635076    3528 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 05:57:23.638761    3528 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 05:57:23.765960    3528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 05:57:23.770487    3528 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 05:57:23.780262    3528 command_runner.go:130] > 192.168.65.254	host.minikube.internal
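The dig call above discovers the Windows host's address from inside the container by resolving host.docker.internal, a name served by Docker Desktop's embedded DNS. The native-Go equivalent of that lookup, for illustration:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    )

    // Resolve host.docker.internal the way `dig +short` does in the log.
    // Inside the kic container this name resolves via Docker's DNS.
    func main() {
    	ips, err := net.DefaultResolver.LookupHost(context.Background(), "host.docker.internal")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ips) // e.g. [192.168.65.254] in this run
    }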
	I1210 05:57:23.784121    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:23.838579    3528 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 05:57:23.838579    3528 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:57:23.841570    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.871575    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.871575    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.871575    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.871575    3528 docker.go:621] Images already preloaded, skipping extraction
	I1210 05:57:23.875579    3528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.6-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1210 05:57:23.907148    3528 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1210 05:57:23.907148    3528 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:23.907148    3528 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 05:57:23.907148    3528 cache_images.go:86] Images are preloaded, skipping loading
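The image listing above is the second half of the preload shortcut: if every expected ref already shows up in docker images, tarball extraction is skipped ("Images are preloaded, skipping loading"). A sketch of that set comparison:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // preloadedImagesPresent lists the daemon's images and checks that every
    // expected ref is already there, in which case extraction can be skipped.
    func preloadedImagesPresent(expected []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, ref := range expected {
    		if !have[ref] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := preloadedImagesPresent([]string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
    		"registry.k8s.io/pause:3.10.1",
    	})
    	fmt.Println(ok, err)
    }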
	I1210 05:57:23.907148    3528 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 05:57:23.907668    3528 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 05:57:23.911609    3528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 05:57:23.978720    3528 command_runner.go:130] > cgroupfs
	I1210 05:57:23.983482    3528 cni.go:84] Creating CNI manager for ""
	I1210 05:57:23.983482    3528 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:57:23.983482    3528 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 05:57:23.983482    3528 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 05:57:23.983482    3528 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
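The kubeadm config above is rendered from the options struct logged at kubeadm.go:190. A toy text/template rendering of just the InitConfiguration fragment shows the shape of that mapping; the template here is a trimmed illustration, not minikube's real one:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed InitConfiguration template; field names echo the options
    // struct in the log line above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	t.Execute(os.Stdout, opts{
    		AdvertiseAddress: "192.168.49.2",
    		APIServerPort:    8441,
    		CRISocket:        "/var/run/cri-dockerd.sock",
    		NodeName:         "functional-871500",
    	})
    }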
	
	I1210 05:57:23.987498    3528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubeadm
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubectl
	I1210 05:57:24.000182    3528 command_runner.go:130] > kubelet
	I1210 05:57:24.000182    3528 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 05:57:24.004093    3528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 05:57:24.018408    3528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 05:57:24.041215    3528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 05:57:24.061272    3528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1210 05:57:24.082615    3528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 05:57:24.095804    3528 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 05:57:24.101162    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:24.247994    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:24.548481    3528 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 05:57:24.548481    3528 certs.go:195] generating shared ca certs ...
	I1210 05:57:24.549012    3528 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:24.549698    3528 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 05:57:24.549774    3528 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 05:57:24.549774    3528 certs.go:257] generating profile certs ...
	I1210 05:57:24.550590    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 05:57:24.550932    3528 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1210 05:57:24.550932    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 05:57:24.551460    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 05:57:24.551604    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 05:57:24.551764    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 05:57:24.551869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 05:57:24.552075    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 05:57:24.552075    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 05:57:24.552075    3528 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 05:57:24.552617    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 05:57:24.552685    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 05:57:24.553394    3528 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 05:57:24.553588    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.553766    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem -> /usr/share/ca-certificates/11304.pem
	I1210 05:57:24.553869    3528 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> /usr/share/ca-certificates/113042.pem
	I1210 05:57:24.554786    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 05:57:24.581958    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 05:57:24.609312    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 05:57:24.634601    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 05:57:24.661713    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 05:57:24.690256    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 05:57:24.717784    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 05:57:24.748075    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 05:57:24.779590    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 05:57:24.808619    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 05:57:24.838348    3528 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 05:57:24.862790    3528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 05:57:24.888297    3528 ssh_runner.go:195] Run: openssl version
	I1210 05:57:24.898078    3528 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 05:57:24.902400    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.918304    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 05:57:24.936062    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946045    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.946080    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.950017    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 05:57:24.993898    3528 command_runner.go:130] > b5213941
	I1210 05:57:24.999156    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 05:57:25.016159    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.034260    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 05:57:25.053147    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.061814    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.065786    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 05:57:25.108176    3528 command_runner.go:130] > 51391683
	I1210 05:57:25.113321    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 05:57:25.129918    3528 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.147630    3528 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 05:57:25.167521    3528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.176125    3528 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.180991    3528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 05:57:25.223232    3528 command_runner.go:130] > 3ec20f2e
	I1210 05:57:25.227937    3528 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
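The repeated test -s / ln -fs / openssl x509 -hash sequence above installs each PEM as an OpenSSL trust anchor: OpenSSL resolves CAs by subject hash, so each file gets a symlink in /etc/ssl/certs named <hash>.0 (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the other two). A minimal Go sketch of one round, assuming root on a Unix target; installCA is a hypothetical helper, not minikube's certs.go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA hashes a CA certificate and links it into /etc/ssl/certs as
// <subject-hash>.0 so TLS clients can find it by hash lookup.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}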
	I1210 05:57:25.244300    3528 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 05:57:25.251407    3528 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 05:57:25.251407    3528 command_runner.go:130] > Device: 8,48	Inode: 15342       Links: 1
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 05:57:25.251407    3528 command_runner.go:130] > Access: 2025-12-10 05:53:12.664767007 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Modify: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] > Change: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.251407    3528 command_runner.go:130] >  Birth: 2025-12-10 05:49:10.064289884 +0000
	I1210 05:57:25.255353    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 05:57:25.300587    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.306046    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 05:57:25.348642    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.354977    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 05:57:25.399294    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.403503    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 05:57:25.448300    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.453152    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 05:57:25.506357    3528 command_runner.go:130] > Certificate will not expire
	I1210 05:57:25.511028    3528 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 05:57:25.553903    3528 command_runner.go:130] > Certificate will not expire
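Each "Certificate will not expire" line is openssl x509 -checkend 86400 exiting 0, meaning the certificate is still valid 24 hours (86,400 seconds) from now; a nonzero exit would mean it expires inside that window. A small Go wrapper showing the same check (notExpiring is a hypothetical helper, not minikube code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strconv"
)

// notExpiring wraps `openssl x509 -checkend <seconds>`: exit status 0 means
// the certificate is still valid <seconds> from now.
func notExpiring(path string, window int) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", strconv.Itoa(window)).Run()
	if err == nil {
		return true, nil // "Certificate will not expire"
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // nonzero exit: expires within the window
	}
	return false, err // openssl itself failed to run
}

func main() {
	ok, err := notExpiring("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	fmt.Println(ok, err)
}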
	I1210 05:57:25.554908    3528 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
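The single long line above is Go's pretty-print of the profile's cluster configuration struct. As a reading aid, here is a toy fragment declaring a few of the logged fields; the names match the dump, but the types and layout are guesses, not minikube's actual definition:

// Illustrative only -- a handful of the fields visible in the
// StartCluster dump above.
type ClusterConfig struct {
	Name             string           // functional-871500
	Driver           string           // docker
	Memory           int              // 4096 (MiB)
	CPUs             int              // 2
	APIServerPort    int              // 8441
	KubernetesConfig KubernetesConfig // the nested {...} block in the dump
}

type KubernetesConfig struct {
	KubernetesVersion string // v1.35.0-rc.1
	ClusterName       string // functional-871500
	ContainerRuntime  string // docker
	ServiceCIDR       string // 10.96.0.0/12
}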
	I1210 05:57:25.558842    3528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 05:57:25.593738    3528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 05:57:25.607577    3528 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 05:57:25.607628    3528 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 05:57:25.607628    3528 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 05:57:25.607628    3528 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 05:57:25.611091    3528 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 05:57:25.623212    3528 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 05:57:25.626623    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.680358    3528 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-871500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.681186    3528 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-871500" cluster setting kubeconfig missing "functional-871500" context setting]
	I1210 05:57:25.681273    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.700123    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.700864    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.702157    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.702219    3528 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 05:57:25.702289    3528 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 05:57:25.702331    3528 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
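kapi.go above hand-assembles a rest.Config (host https://127.0.0.1:50086, the profile's client cert and key). The stock client-go route to an equivalent config simply loads the kubeconfig that was just repaired; a minimal sketch using the path from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log lines above.
	kubeconfig := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("talking to", cfg.Host, "clientset ready:", clientset != nil)
}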
	I1210 05:57:25.706500    3528 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 05:57:25.721533    3528 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 05:57:25.721533    3528 kubeadm.go:602] duration metric: took 113.9037ms to restartPrimaryControlPlane
	I1210 05:57:25.721533    3528 kubeadm.go:403] duration metric: took 166.6224ms to StartCluster
	I1210 05:57:25.721533    3528 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.721533    3528 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.722880    3528 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:57:25.723468    3528 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 05:57:25.723468    3528 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 05:57:25.723468    3528 addons.go:70] Setting storage-provisioner=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:70] Setting default-storageclass=true in profile "functional-871500"
	I1210 05:57:25.723468    3528 addons.go:239] Setting addon storage-provisioner=true in "functional-871500"
	I1210 05:57:25.723990    3528 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-871500"
	I1210 05:57:25.723990    3528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 05:57:25.724039    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.727290    3528 out.go:179] * Verifying Kubernetes components...
	I1210 05:57:25.732528    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733215    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.733847    3528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 05:57:25.784477    3528 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:57:25.784477    3528 kapi.go:59] client config for functional-871500: &rest.Config{Host:"https://127.0.0.1:50086", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 05:57:25.785479    3528 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 05:57:25.785479    3528 addons.go:239] Setting addon default-storageclass=true in "functional-871500"
	I1210 05:57:25.785479    3528 host.go:66] Checking if "functional-871500" exists ...
	I1210 05:57:25.792481    3528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 05:57:25.809483    3528 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:57:25.812486    3528 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:25.812486    3528 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 05:57:25.815477    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.843475    3528 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:25.843475    3528 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 05:57:25.846475    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:25.863476    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 05:57:25.889481    3528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 05:57:25.893492    3528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
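sshutil.go above opens SSH clients to the node's forwarded port 50082 using the machine's id_rsa. A self-contained sketch of that connection with golang.org/x/crypto/ssh; the InsecureIgnoreHostKey callback is a simplification that is only acceptable for a throwaway local test node:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and port taken from the sshutil.go lines above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:50082", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected to", client.RemoteAddr())
}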
	I1210 05:57:25.997793    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.023732    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.053186    3528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 05:57:26.112921    3528 node_ready.go:35] waiting up to 6m0s for node "functional-871500" to be "Ready" ...
	I1210 05:57:26.112921    3528 type.go:168] "Request Body" body=""
	I1210 05:57:26.113457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:26.116638    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
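node_ready.go above begins polling GET /api/v1/nodes/functional-871500; while the control plane restarts, the apiserver answers with Retry-After, which is why the attempts below tick once per second. A simplified client-go version of that wait (waitNodeReady is illustrative; minikube's real loop also honors Retry-After headers and logs request bodies):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition is True or the
// context expires (the wait above is capped at 6m0s).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "functional-871500"))
}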
	I1210 05:57:26.133091    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.136407    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.136407    3528 retry.go:31] will retry after 345.217772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
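retry.go above reapplies the manifest with a growing, jittered delay (345 ms here, climbing to 11.4 s later in the log) until the restarted apiserver accepts connections again. A generic sketch of that backoff pattern, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping a jittered,
// exponentially growing delay between failures.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := time.Duration(float64(base) * math.Pow(1.5, float64(i)))
		d += time.Duration(rand.Int63n(int64(d)/2 + 1)) // jitter
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}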
	I1210 05:57:26.150366    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.202827    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.202827    3528 retry.go:31] will retry after 151.034764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.359087    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.431671    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.436291    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.436291    3528 retry.go:31] will retry after 206.058838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.486383    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:26.557721    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.560620    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.560620    3528 retry.go:31] will retry after 499.995799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.648783    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:26.718122    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:26.721048    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:26.721048    3528 retry.go:31] will retry after 393.754282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.063815    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.116921    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:27.116921    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:27.119587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:27.119858    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:27.142617    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.145831    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.145969    3528 retry.go:31] will retry after 468.483229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.204933    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.208432    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.208432    3528 retry.go:31] will retry after 855.193396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.619421    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:27.706849    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:27.710739    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:27.710739    3528 retry.go:31] will retry after 912.738336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.069754    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:28.120644    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:28.120644    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:28.123531    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:28.143254    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.148927    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.148927    3528 retry.go:31] will retry after 983.332816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.628567    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:28.701176    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:28.706795    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:28.706795    3528 retry.go:31] will retry after 1.385287928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.123599    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:29.123599    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:29.126305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:29.136958    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:29.206724    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:29.211387    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:29.211387    3528 retry.go:31] will retry after 1.736840395s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.096718    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:30.126845    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:30.126845    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:30.129697    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:30.181502    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:30.186062    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.186111    3528 retry.go:31] will retry after 1.361370091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:30.954728    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:31.028355    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.034556    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.034556    3528 retry.go:31] will retry after 1.491617713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.130593    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:31.130593    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:31.133462    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:31.553535    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:31.628770    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:31.634748    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:31.634748    3528 retry.go:31] will retry after 3.561022392s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.134739    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:32.134739    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:32.138071    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:32.531847    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:32.611685    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:32.617246    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:32.617246    3528 retry.go:31] will retry after 5.95380248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:33.138488    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:33.138875    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:33.141787    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:34.142311    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:34.142734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:34.145176    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.146145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:35.146145    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:35.148924    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:35.201546    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:35.276874    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:35.281183    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:35.281183    3528 retry.go:31] will retry after 3.730531418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:36.149846    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:36.149846    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.152788    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:57:36.152788    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:36.152788    3528 type.go:168] "Request Body" body=""
	I1210 05:57:36.152788    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:36.155425    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:37.155901    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:37.155901    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:37.159513    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.161109    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:38.161109    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:38.164724    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:38.577263    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:38.649489    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:38.652783    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:38.652883    3528 retry.go:31] will retry after 3.457172569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.016926    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:39.102009    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:39.106825    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.106825    3528 retry.go:31] will retry after 7.958311304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:39.165052    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:39.165052    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:39.167612    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:40.168385    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:40.168385    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:40.171568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:41.172124    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:41.172124    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:41.175998    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:42.114835    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:42.176733    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:42.176733    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:42.179377    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:42.194232    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:42.198994    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:42.198994    3528 retry.go:31] will retry after 11.400414998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:43.179774    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:43.179774    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:43.182962    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:44.183364    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:44.183364    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:44.186385    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:45.186936    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:45.187376    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:45.189591    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:46.190096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:46.190096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.196158    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	W1210 05:57:46.196158    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
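
The ten-attempt cycles above and below come from the Kubernetes client's request retry: each GET to /api/v1/nodes/functional-871500 draws a Retry-After response, the client sleeps the advertised 1s, and after attempt 10 the outer readiness poll records the EOF and immediately starts a fresh cycle. Below is a minimal sketch of that client-side pattern, assuming a plain HTTP client; the real logic lives in client-go's with_retry.go and differs in detail:

	package main

	import (
		"errors"
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	// getWithRetryAfter retries a GET while the server keeps answering
	// with a Retry-After header, up to maxAttempts tries.
	func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				return nil, err
			}
			ra := resp.Header.Get("Retry-After")
			if ra == "" {
				return resp, nil // no throttling hint: keep the response
			}
			resp.Body.Close()
			delay := time.Second // the log shows delay="1s" on every attempt
			if secs, convErr := strconv.Atoi(ra); convErr == nil {
				delay = time.Duration(secs) * time.Second
			}
			time.Sleep(delay)
		}
		return nil, errors.New("retry budget exhausted")
	}

	func main() {
		_, err := getWithRetryAfter(http.DefaultClient, "https://127.0.0.1:50086/api/v1/nodes/functional-871500", 10)
		fmt.Println(err)
	}
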
	I1210 05:57:46.196158    3528 type.go:168] "Request Body" body=""
	I1210 05:57:46.196158    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:46.198622    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:47.071512    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:47.150023    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:47.153571    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.153571    3528 retry.go:31] will retry after 8.685329621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:47.199356    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:47.199356    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:47.202855    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:48.203136    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:48.203136    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:48.209086    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:57:49.209940    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:49.209940    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:49.213512    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:50.214412    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:50.214412    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:50.218493    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:57:51.219009    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:51.219009    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:51.221689    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:52.221931    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:52.221931    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:52.224876    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:53.225848    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:53.225848    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:53.229481    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:53.604916    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:57:53.684553    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:53.688941    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:53.688941    3528 retry.go:31] will retry after 15.037235136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:54.230291    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:54.230291    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:54.233031    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:55.233749    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:55.233749    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:55.236864    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:55.845563    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:57:55.917684    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:57:55.920989    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:55.920989    3528 retry.go:31] will retry after 14.528574699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:57:56.237162    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:56.237162    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.240358    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:57:56.240358    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:57:56.240358    3528 type.go:168] "Request Body" body=""
	I1210 05:57:56.240358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:56.242693    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:57:57.243108    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:57.243108    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:57.246459    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:58.247768    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:58.248150    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:58.251587    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:57:59.252608    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:57:59.252608    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:57:59.255751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:00.256340    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:00.256340    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:00.259424    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:01.260417    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:01.260417    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:01.263835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:02.264658    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:02.264976    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:02.268894    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:03.269646    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:03.270040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:03.272742    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:04.273295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:04.273295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:04.276636    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:05.277239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:05.277639    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:05.280629    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:06.281483    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:06.281483    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.285745    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	W1210 05:58:06.285802    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:06.285840    3528 type.go:168] "Request Body" body=""
	I1210 05:58:06.285987    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:06.288564    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:07.289127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:07.289127    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:07.292563    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:08.293072    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:08.293072    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:08.297241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:08.732392    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:08.811298    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:08.814895    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:08.814895    3528 retry.go:31] will retry after 24.059893548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:09.297667    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:09.297667    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:09.300824    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:10.301402    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:10.301402    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:10.304411    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:10.455124    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:10.546239    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:10.546239    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:10.546239    3528 retry.go:31] will retry after 31.876597574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:11.304978    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:11.304978    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:11.308149    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:12.308734    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:12.308734    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:12.311812    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:13.312561    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:13.313241    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:13.316204    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:14.317485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:14.317883    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:14.320038    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:15.320460    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:15.320460    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:15.323420    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:16.323723    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:16.323723    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.326977    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:16.326977    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:16.327139    3528 type.go:168] "Request Body" body=""
	I1210 05:58:16.327227    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:16.329681    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:17.330932    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:17.330932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:17.333882    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:18.334334    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:18.334798    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:18.338144    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:19.338534    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:19.338534    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:19.342989    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:20.343612    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:20.343612    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:20.346805    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:21.347681    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:21.347681    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:21.350863    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:22.351290    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:22.351290    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:22.354536    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:23.355239    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:23.355239    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:23.358499    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:24.359467    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:24.359467    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:24.364653    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1210 05:58:25.365025    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:25.365025    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:25.368433    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:26.369056    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:26.369056    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.372426    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:26.372457    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:26.372457    3528 type.go:168] "Request Body" body=""
	I1210 05:58:26.372457    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:26.374640    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:27.375624    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:27.375624    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:27.379448    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:28.380744    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:28.380744    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:28.384412    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:29.385100    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:29.385455    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:29.388161    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:30.388490    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:30.388490    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:30.391842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:31.392294    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:31.392294    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:31.395842    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:32.397016    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:32.397016    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:32.399019    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:32.881902    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:58:32.967281    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:32.972519    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 05:58:32.972519    3528 retry.go:31] will retry after 41.610684516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
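
The addon retries themselves back off: the delays recorded for storageclass.yaml grow from 3.46s through 11.4s, 15.0s and 24.1s to 41.6s, and the interleaved storage-provisioner series (7.96s, 8.69s, 14.5s, 31.9s) climbs the same way, roughly exponential growth with jitter. A hypothetical sketch of such a schedule follows; the constants are illustrative, chosen only to resemble the delays in this log, and are not taken from minikube's retry.go:

	package main

	import (
		"fmt"
		"math"
		"math/rand"
		"time"
	)

	// backoff returns an exponentially growing delay with up to 50% jitter.
	func backoff(attempt int) time.Duration {
		base := 3 * time.Second
		d := time.Duration(float64(base) * math.Pow(1.5, float64(attempt)))
		jitter := time.Duration(rand.Int63n(int64(d) / 2))
		return d + jitter
	}

	func main() {
		for attempt := 0; attempt < 6; attempt++ {
			fmt.Printf("retry %d after %v\n", attempt+1, backoff(attempt))
		}
	}
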
	I1210 05:58:33.399525    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:33.399525    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:33.402804    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:34.403496    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:34.403496    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:34.406699    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:35.406992    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:35.406992    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:35.410007    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:36.410696    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:36.410696    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.414578    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:36.414673    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:36.414815    3528 type.go:168] "Request Body" body=""
	I1210 05:58:36.414864    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:36.417495    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:37.417917    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:37.418702    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:37.421367    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:38.421905    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:38.421905    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:38.424630    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:39.425767    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:39.426355    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:39.429576    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:40.429801    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:40.429801    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:40.433301    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:41.433959    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:41.433959    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:41.437621    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:42.429097    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 05:58:42.438217    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:42.438429    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:42.440917    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:42.509794    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.514955    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:58:42.515232    3528 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:58:43.441740    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:43.441740    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:43.444947    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:44.445672    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:44.445672    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:44.449361    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:45.449616    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:45.450071    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:45.452940    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:46.454145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:46.454503    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:46.458078    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:46.458078    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:58:46.458078    3528 type.go:168] "Request Body" body=""
	I1210 05:58:46.458078    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:46.460173    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:47.460277    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:47.460277    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:47.462994    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:48.463438    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:48.463438    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:48.466303    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:49.467359    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:49.467359    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:49.471033    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:50.471353    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:50.471932    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:50.474800    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:51.475228    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:51.475228    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:51.478898    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:52.479596    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:52.479596    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:52.483072    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:53.483188    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:53.483188    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:53.486888    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:54.487194    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:54.487194    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:54.489701    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:55.490295    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:55.490295    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:55.494381    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:58:56.495339    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:56.495339    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:56.498522    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:58:56.498627    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
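	The block above shows the shape of the whole wait loop: the client GETs /api/v1/nodes/functional-871500, the endpoint answers with a Retry-After and an empty status, with_retry sleeps 1s and re-issues the request up to attempt=10, and node_ready then logs the EOF warning and begins the next round. A minimal Go sketch of such a Retry-After loop follows; the URL, the 1s delay, and the ten-attempt budget are taken from the log, while everything else (function names, client setup) is assumed for illustration and is not minikube or client-go source.

// Illustrative sketch only: poll a URL and honor Retry-After, giving up
// after ten attempts the way the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(client *http.Client, url string, maxAttempts int) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// An EOF here matches the node_ready warning above: the server
			// closed the connection before a complete response arrived.
			return err
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusTooManyRequests &&
			resp.StatusCode != http.StatusServiceUnavailable {
			return nil // got a real answer
		}
		delay := time.Second // fallback when Retry-After is absent
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, convErr := strconv.Atoi(s); convErr == nil {
				delay = time.Duration(secs) * time.Second
			}
		}
		fmt.Printf("Got a Retry-After response, delay=%s attempt=%d\n", delay, attempt)
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Local apiserver endpoint with a self-signed cert (assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	err := getWithRetryAfter(client,
		"https://127.0.0.1:50086/api/v1/nodes/functional-871500", 10)
	if err != nil {
		fmt.Println("node poll failed:", err)
	}
}

	In Go's net/http, a bare EOF error from Client.Get usually indicates the peer closed the connection mid-exchange, which is consistent with the apiserver behind 127.0.0.1:50086 never becoming healthy during this window.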
	I1210 05:58:56.498685    3528 type.go:168] "Request Body" body=""
	I1210 05:58:56.498685    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:56.501262    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:57.501619    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:57.501619    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:57.504510    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:58:58.504988    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:58.504988    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:58.508232    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:58:59.508716    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:58:59.508952    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:58:59.512196    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:00.512876    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:00.512876    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:00.516262    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:01.516926    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:01.516926    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:01.519996    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:02.520867    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:02.520867    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:02.524453    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:03.525204    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:03.525204    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:03.528417    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:04.528800    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:04.528800    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:04.532496    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:05.533449    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:05.533449    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:05.535518    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:06.535694    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:06.535694    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:06.538826    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:06.538826    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:06.538826    3528 type.go:168] "Request Body" body=""
	I1210 05:59:06.538826    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:06.541732    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:07.542096    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:07.542096    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:07.546090    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:08.546686    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:08.546686    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:08.550042    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:09.551069    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:09.551069    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:09.554772    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:10.555918    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:10.556223    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:10.558373    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:11.559619    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:11.559619    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:11.562909    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:12.563393    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:12.563393    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:12.566949    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:13.567538    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:13.567538    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:13.570615    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:14.571357    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:14.571869    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:14.574910    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:14.588699    3528 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 05:59:14.659982    3528 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 05:59:14.662951    3528 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 05:59:14.667272    3528 out.go:179] * Enabled addons: 
	I1210 05:59:14.669291    3528 addons.go:530] duration metric: took 1m48.9444759s for enable addons: enabled=[]
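	The storageclass apply above fails for the same underlying reason as the node polling: kubectl's client-side validation tries to download the OpenAPI schema from https://localhost:8441 and gets connection refused, so the command exits with status 1, the runner logs "apply failed, will retry", and the enable ultimately returns the error, leaving enabled=[]. A minimal Go sketch of that retry-the-apply behavior is below; the kubeconfig and manifest paths come from the log, while the helper name, retry budget, and backoff are assumptions for the example, not minikube's actual implementation (it also assumes kubectl is on PATH). As the error text itself notes, validation could alternatively be skipped with --validate=false.

// Illustrative sketch only: re-run `kubectl apply` while the apiserver is
// still refusing connections, mirroring the "apply failed, will retry" log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil // manifest applied
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(2 * time.Second) // give the apiserver time to come back
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml", 5)
	if err != nil {
		fmt.Println(err)
	}
}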
	I1210 05:59:15.575548    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:15.575548    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:15.577957    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:16.578269    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:16.578269    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:16.581535    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:16.581626    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:16.581709    3528 type.go:168] "Request Body" body=""
	I1210 05:59:16.581757    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:16.584351    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:17.585087    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:17.585598    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:17.587811    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:18.588817    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:18.588817    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:18.593150    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:19.593863    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:19.593863    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:19.596290    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:20.596979    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:20.597284    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:20.600249    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:21.600500    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:21.600500    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:21.603751    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:22.603880    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:22.603880    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:22.608748    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:23.609127    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:23.609447    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:23.612322    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:24.613043    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:24.613043    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:24.616893    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:25.617546    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:25.617895    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:25.620726    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:26.620874    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:26.621261    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:26.624539    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:26.624539    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:26.624539    3528 type.go:168] "Request Body" body=""
	I1210 05:59:26.624539    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:26.627913    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:27.628729    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:27.628729    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:27.631708    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:28.632003    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:28.632003    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:28.635112    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:29.636254    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:29.636254    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:29.640073    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:30.640567    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:30.640567    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:30.643449    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:31.644603    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:31.644603    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:31.648321    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:32.648642    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:32.649007    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:32.651241    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:33.652555    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:33.652555    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:33.655647    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:34.656445    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:34.656445    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:34.659525    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:35.660470    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:35.660769    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:35.663511    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:36.663841    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:36.663841    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:36.667272    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:36.667368    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:36.667437    3528 type.go:168] "Request Body" body=""
	I1210 05:59:36.667551    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:36.671515    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:37.671899    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:37.671899    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:37.676101    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:38.676473    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:38.676473    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:38.679323    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:39.679890    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:39.679890    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:39.682898    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:40.683418    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:40.683418    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:40.687065    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:41.687380    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:41.687380    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:41.690398    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:42.691292    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:42.691292    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:42.693967    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:43.694336    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:43.694336    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:43.697547    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:44.697757    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:44.697757    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:44.700896    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:45.701213    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:45.701677    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:45.704167    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:46.704767    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:46.705237    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:46.708460    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 05:59:46.709023    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:46.709124    3528 type.go:168] "Request Body" body=""
	I1210 05:59:46.709198    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:46.711537    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:47.711814    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:47.712045    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:47.715217    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:48.716360    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:48.716360    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:48.719060    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:49.719847    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:49.719847    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:49.723779    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:50.724269    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:50.724269    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:50.728439    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 05:59:51.729126    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:51.729126    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:51.732791    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:52.734110    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:52.734110    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:52.738074    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:53.738271    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:53.738271    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:53.741809    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:54.742174    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:54.742174    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:54.746052    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:55.747079    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:55.747079    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:55.750285    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:56.750719    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:56.750719    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:56.753273    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 05:59:56.753273    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 05:59:56.753273    3528 type.go:168] "Request Body" body=""
	I1210 05:59:56.753273    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:56.755741    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:57.757283    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:57.757592    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:57.759856    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 05:59:58.761013    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:58.761013    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:58.764032    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 05:59:59.764386    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 05:59:59.764386    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 05:59:59.767579    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:00.767741    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:00.767741    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:00.771607    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:01.771831    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:01.771831    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:01.775356    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:02.775642    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:02.775642    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:02.779145    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:03.779411    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:03.779411    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:03.783151    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:04.783296    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:04.783296    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:04.786762    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:05.787153    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:05.787153    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:05.790518    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:06.790834    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:06.790834    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:06.794128    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:06.794128    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:06.794128    3528 type.go:168] "Request Body" body=""
	I1210 06:00:06.794660    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:06.796765    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:07.797318    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:07.797318    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:07.800177    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:08.801465    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:08.801465    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:08.804595    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:09.805061    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:09.805401    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:09.807835    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:10.808649    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:10.808991    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:10.811366    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:11.811812    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:11.811812    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:11.815185    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:12.815710    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:12.815710    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:12.819741    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:13.820205    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:13.820482    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:13.823243    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:14.823451    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:14.823451    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:14.826552    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:15.827102    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:15.827102    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:15.830239    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:16.830899    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:16.830899    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:16.833829    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:00:16.833829    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:16.834466    3528 type.go:168] "Request Body" body=""
	I1210 06:00:16.834489    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:16.836240    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:17.836565    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:17.836565    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:17.840343    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:18.840710    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:18.841040    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:18.844672    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:19.845082    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:19.845358    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:19.846852    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:20.848265    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:20.848265    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:20.851784    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:21.852223    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:21.852223    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:21.855023    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:22.856027    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:22.856027    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:22.859873    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:23.860923    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:23.860923    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:23.864261    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:24.864916    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:24.864916    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:24.868305    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:25.869078    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:25.869078    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:25.871509    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:26.871824    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:26.871824    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:26.875349    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:26.875349    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:26.875881    3528 type.go:168] "Request Body" body=""
	I1210 06:00:26.876020    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:26.878104    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:27.878306    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:27.878306    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:27.881296    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:28.882571    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:28.882571    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:28.885774    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:29.885982    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:29.885982    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:29.889065    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:30.889836    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:30.889836    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:30.892524    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:31.893215    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:31.893546    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:31.895912    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:32.897363    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:32.897363    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:32.900093    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:33.900778    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:33.900778    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:33.903568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:34.904276    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:34.904276    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:34.907470    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:35.909284    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:35.909284    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:35.912316    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:36.913041    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:36.913041    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:36.916114    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:36.916643    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:36.916788    3528 type.go:168] "Request Body" body=""
	I1210 06:00:36.916788    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:36.918746    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:00:37.918978    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:37.918978    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:37.922454    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:38.922847    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:38.923075    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:38.926196    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:39.926491    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:39.926491    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:39.929932    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:40.930368    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:40.930368    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:40.934200    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:41.934738    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:41.934738    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:41.938126    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:42.939113    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:42.939113    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:42.941791    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:43.941991    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:43.942311    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:43.945177    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:44.945677    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:44.945677    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:44.949097    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:45.949865    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:45.950099    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:45.953257    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:46.953679    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:46.953679    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:46.957085    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:46.957085    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:46.957085    3528 type.go:168] "Request Body" body=""
	I1210 06:00:46.957085    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:46.959580    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:47.960148    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:47.960373    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:47.963579    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:48.964463    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:48.964463    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:48.967395    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:49.967782    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:49.967782    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:49.970748    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:50.971566    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:50.971566    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:50.974845    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:51.975483    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:51.976062    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:51.980347    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:00:52.980545    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:52.980545    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:52.983731    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:00:53.984073    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:53.984391    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:53.987349    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:54.988244    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:54.988244    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:54.992602    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:00:55.993170    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:55.993170    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:55.996092    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:56.996214    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:56.996214    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:56.999523    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:00:56.999523    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:00:57.000058    3528 type.go:168] "Request Body" body=""
	I1210 06:00:57.000148    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:57.002201    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:58.003156    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:58.003156    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:58.005615    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:00:59.006304    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:00:59.006304    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:00:59.009503    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:00.010519    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:00.010519    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:00.013059    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:01.013184    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:01.013184    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:01.017608    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:01:02.018033    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:02.018033    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:02.021448    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:03.022254    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:03.022604    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:03.025475    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:04.026637    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:04.026637    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:04.029792    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:05.030057    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:05.030057    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:05.033922    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:06.034438    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:06.034438    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:06.037480    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:07.038283    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:07.038283    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:07.041280    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:01:07.041328    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:07.041532    3528 type.go:168] "Request Body" body=""
	I1210 06:01:07.041606    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:07.044121    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:08.044522    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:08.044522    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:08.048047    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:09.048331    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:09.048331    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:09.051118    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:10.051651    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:10.051929    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:10.054948    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:11.055145    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:11.055564    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:11.058295    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:12.059200    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:12.059345    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:12.061763    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:13.062357    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:13.062357    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:13.067157    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:01:14.068007    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:14.068443    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:14.071405    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:15.071610    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:15.071610    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:15.075149    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:16.075929    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:16.075929    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:16.078363    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:17.078629    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:17.078629    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:17.082263    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:01:17.082399    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:17.082476    3528 type.go:168] "Request Body" body=""
	I1210 06:01:17.082601    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:17.084577    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1210 06:01:18.085283    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:18.085283    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:18.087761    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:19.089284    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:19.089284    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:19.093369    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:01:20.094032    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:20.094032    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:20.097108    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:21.097562    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:21.097562    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:21.104228    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1210 06:01:22.104512    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:22.104512    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:22.106967    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:23.107603    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:23.107603    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:23.110798    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:24.111778    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:24.111778    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:24.114416    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:25.115471    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:25.115471    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:25.118129    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:26.118485    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:26.118485    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:26.121278    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:27.121884    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:27.121884    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:27.125182    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:01:27.125182    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:27.125182    3528 type.go:168] "Request Body" body=""
	I1210 06:01:27.125182    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:27.127600    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:28.128000    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:28.128000    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:28.131773    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:29.132042    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:29.132453    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:29.135795    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:30.136052    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:30.136052    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:30.140250    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:01:31.140497    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:31.140975    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:31.143389    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:32.143469    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:32.144131    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:32.148568    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1210 06:01:33.148831    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:33.148831    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:33.152129    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:34.152786    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:34.152786    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:34.156156    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:35.156429    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:35.156429    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:35.159806    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:36.160061    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:36.160061    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:36.163126    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:37.163591    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:37.163591    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:37.166938    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:01:37.166938    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:37.167463    3528 type.go:168] "Request Body" body=""
	I1210 06:01:37.167518    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:37.169655    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:38.169997    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:38.169997    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:38.173075    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:39.173563    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:39.173563    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:39.177056    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:40.177923    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:40.177923    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:40.181566    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:41.182378    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:41.182378    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:41.185302    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:42.185967    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:42.185967    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:42.188700    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:43.189505    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:43.189505    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:43.192705    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:44.193063    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:44.193560    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:44.195918    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:45.196717    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:45.196717    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:45.200077    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:46.200329    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:46.200329    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:46.203250    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:47.204114    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:47.204114    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:47.206151    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1210 06:01:47.206151    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	I1210 06:01:47.206151    3528 type.go:168] "Request Body" body=""
	I1210 06:01:47.206692    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:47.209053    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:48.209387    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:48.209387    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:48.213313    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:49.213608    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:49.213608    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:49.217045    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:50.217195    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:50.217195    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:50.220141    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:51.220422    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:51.220422    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:51.223771    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:52.224601    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:52.224601    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:52.227794    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1210 06:01:53.228750    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:53.228750    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:53.231412    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:54.232114    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:54.232114    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:54.235027    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:55.235579    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:55.235983    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:55.238624    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:56.239321    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:56.239321    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:56.241809    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1210 06:01:57.242257    3528 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:50086/api/v1/nodes/functional-871500"
	I1210 06:01:57.242257    3528 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:50086/api/v1/nodes/functional-871500" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1210 06:01:57.245969    3528 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1210 06:01:57.245969    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): Get "https://127.0.0.1:50086/api/v1/nodes/functional-871500": EOF
	[... the same one-second cycle (GET https://127.0.0.1:50086/api/v1/nodes/functional-871500, empty Retry-After response, with_retry attempts 1-10) repeats verbatim apart from timestamps; node_ready.go:55 logs the identical EOF "will retry" warning every ten seconds at 06:02:07, 06:02:17, 06:02:27, 06:02:37, 06:02:47, 06:02:57, 06:03:07, and 06:03:17, and the final cycle's attempts run through 06:03:25 ...]
	W1210 06:03:26.117891    3528 node_ready.go:55] error getting node "functional-871500" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 06:03:26.117891    3528 node_ready.go:38] duration metric: took 6m0.0004685s for node "functional-871500" to be "Ready" ...
	I1210 06:03:26.123026    3528 out.go:203] 
	W1210 06:03:26.125419    3528 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 06:03:26.125419    3528 out.go:285] * 
	W1210 06:03:26.127475    3528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:03:26.130878    3528 out.go:203] 
	
	
	==> Docker <==
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483189206Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483194507Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483214008Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.483249911Z" level=info msg="Initializing buildkit"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.582637464Z" level=info msg="Completed buildkit initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589253381Z" level=info msg="Daemon has completed initialization"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589392791Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589467497Z" level=info msg="API listen on [::]:2376"
	Dec 10 05:57:22 functional-871500 dockerd[10833]: time="2025-12-10T05:57:22.589490799Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 05:57:22 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:22 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 05:57:22 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 05:57:23 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Loaded network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 05:57:23 functional-871500 cri-dockerd[11152]: time="2025-12-10T05:57:23Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 05:57:23 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:05:39.434941   20605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:39.436093   20605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:39.437364   20605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:39.438709   20605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:05:39.439803   20605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001083] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001015] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000962] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000877] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 05:57] CPU: 2 PID: 55724 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000754] RIP: 0033:0x7fd067afcb20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fd067afcaf6.
	[  +0.000673] RSP: 002b:00007ffe57c686d0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000787] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000893] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000747] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000734] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000747] FS:  0000000000000000 GS:  0000000000000000
	[  +0.824990] CPU: 8 PID: 55850 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000805] RIP: 0033:0x7f91646e5b20
	[  +0.000401] Code: Unable to access opcode bytes at RIP 0x7f91646e5af6.
	[  +0.000653] RSP: 002b:00007ffe3817fb80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000798] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:05:39 up  1:33,  0 user,  load average: 0.54, 0.39, 0.61
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:05:36 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:36 functional-871500 kubelet[20435]: E1210 06:05:36.585169   20435 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:36 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:36 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:37 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 990.
	Dec 10 06:05:37 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:37 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:37 functional-871500 kubelet[20448]: E1210 06:05:37.348588   20448 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:37 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:37 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:38 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 991.
	Dec 10 06:05:38 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:38 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:38 functional-871500 kubelet[20476]: E1210 06:05:38.103274   20476 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:38 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:38 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:38 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 992.
	Dec 10 06:05:38 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:38 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:38 functional-871500 kubelet[20506]: E1210 06:05:38.843661   20506 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:05:38 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:05:38 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:05:39 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 993.
	Dec 10 06:05:39 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:05:39 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
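The node_ready polling loop at the top of this log only ever sees empty responses and EOFs from https://127.0.0.1:50086 because nothing answers behind the forwarded port while the kubelet crash-loops (see the kubelet section above). A minimal sketch for reproducing the probe from the host, assuming only that curl is available:

	# Probe the forwarded apiserver endpoint the retry loop polls; -k skips TLS
	# verification since minikube's CA is not in the host trust store.
	curl -sk --max-time 5 https://127.0.0.1:50086/healthz || echo "apiserver not responding"
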
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (598.7599ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (3.13s)
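The kubelet crash-loop above (restart counter 990 through 993) is the root failure: kubelet v1.35 refuses to start because the node runs cgroup v1, per "kubelet is configured to not run on a host using cgroup v1". The kubeadm output later in this report names the controlling option, 'FailCgroupV1'. A minimal sketch of an opt-out patch, assuming kubeadm's --patches file convention (<target>+<patchtype>.yaml) and the v1beta1 field name failCgroupV1; the patches directory is illustrative, not taken from this run:

	# Sketch only: a kubeadm patch relaxing the cgroup v1 validation named in the
	# kubelet error above. Directory and filename follow kubeadm's --patches
	# convention; pair it with skipping the SystemVerification preflight check.
	mkdir -p /var/tmp/minikube/patches
	cat > /var/tmp/minikube/patches/kubeletconfiguration+strategic.yaml <<-'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
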

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (740.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 06:08:02.272151   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:09:25.339155   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:09:45.904676   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:12:48.987657   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:13:02.276705   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:14:45.907719   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m17.0918933s)

-- stdout --
	* [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000948554s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
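The "Suggestion" line at the end of the stderr block is directly runnable; as a next step (flag copied verbatim from the minikube output above, outcome not verified for this host):

	# Retry the restart with the cgroup driver minikube suggests above.
	out/minikube-windows-amd64.exe start -p functional-871500 \
	  --extra-config=kubelet.cgroup-driver=systemd
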
functional_test.go:774: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m17.1029328s for "functional-871500" cluster.
I1210 06:17:57.981744   11304 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
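Since every failure in this group traces back to cgroup v1 on the 5.15.153.1-microsoft-standard-WSL2 kernel, the host-level fix is to move WSL2 itself to cgroup v2. A sketch under stated assumptions (Windows host honoring %UserProfile%\.wslconfig, kernel parameter cgroup_no_v1=all; the profile path is the one seen elsewhere in this report):

	# Hypothetical host-side workaround, run from inside WSL2: boot the WSL2
	# kernel with cgroup v2 only, so kubelet v1.35's cgroup v1 check passes.
	cat >> /mnt/c/Users/jenkins.minikube4/.wslconfig <<-'EOF'
	[wsl2]
	kernelCommandLine = cgroup_no_v1=all
	EOF
	# Then, from Windows: wsl --shutdown   (WSL distros restart on next use)
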
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
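When reading these inspect dumps, the one field that matters for this failure is the host port mapped to the in-container apiserver port 8441. A small sketch using docker inspect's Go-template output (container name from this report; it should print 50086, matching the URLs polled earlier):

	# Extract the host port forwarded to 8441/tcp inside the kic container.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' \
	  functional-871500
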
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (611.8265ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.3547997s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-493600 ssh pgrep buildkitd                                                                                 │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls                                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls --format json --alsologtostderr                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service │ functional-493600 service hello-node --url                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image ls --format table --alsologtostderr                                                           │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p functional-493600                                                                                                  │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start   │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	│ start   │ -p functional-871500 --alsologtostderr -v=8                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:57 UTC │                     │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.1                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.3                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:latest                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add minikube-local-cache-test:functional-871500                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache delete minikube-local-cache-test:functional-871500                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ list                                                                                                                  │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl images                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ cache   │ functional-871500 cache reload                                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                   │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl │ functional-871500 kubectl -- --context functional-871500 get pods                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ start   │ -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:05:40
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
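
The four header lines above describe the standard klog/glog prefix used by every entry that follows. As a minimal illustration (a hypothetical parser sketch, not code from minikube itself), the documented layout can be split into its fields with a regular expression in Go:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the layout stated in the header:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1210 06:05:40.939558    4268 out.go:360] Setting OutFile to fd 1136 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
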
	I1210 06:05:40.939558    4268 out.go:360] Setting OutFile to fd 1136 ...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.981558    4268 out.go:374] Setting ErrFile to fd 1864...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.994563    4268 out.go:368] Setting JSON to false
	I1210 06:05:40.997553    4268 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5672,"bootTime":1765341068,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:05:40.997553    4268 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:05:41.001553    4268 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:05:41.004553    4268 notify.go:221] Checking for updates...
	I1210 06:05:41.007553    4268 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:05:41.009554    4268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:05:41.013554    4268 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:05:41.018172    4268 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:05:41.020466    4268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:05:41.023475    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:41.023475    4268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:05:41.199301    4268 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:05:41.203110    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.444620    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.42593568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.449620    4268 out.go:179] * Using the docker driver based on existing profile
	I1210 06:05:41.451493    4268 start.go:309] selected driver: docker
	I1210 06:05:41.451493    4268 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.451493    4268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:05:41.457890    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.686631    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.6698388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.735496    4268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:05:41.735496    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:41.735496    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:41.735496    4268 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.741018    4268 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 06:05:41.744259    4268 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 06:05:41.749232    4268 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:05:41.752040    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:41.752173    4268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:05:41.752173    4268 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 06:05:41.752173    4268 cache.go:65] Caching tarball of preloaded images
	I1210 06:05:41.752485    4268 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 06:05:41.752621    4268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 06:05:41.752768    4268 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 06:05:41.832812    4268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:05:41.832812    4268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:05:41.832812    4268 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:05:41.832812    4268 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:05:41.832812    4268 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-871500"
	I1210 06:05:41.832812    4268 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:05:41.832812    4268 fix.go:54] fixHost starting: 
	I1210 06:05:41.839306    4268 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 06:05:41.895279    4268 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 06:05:41.895279    4268 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:05:41.898650    4268 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 06:05:41.898650    4268 machine.go:94] provisionDockerMachine start ...
	I1210 06:05:41.901828    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:41.956991    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:41.957565    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:41.957565    4268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:05:42.140179    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.140179    4268 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 06:05:42.144876    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.200094    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.200718    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.200718    4268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 06:05:42.397029    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.400561    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.454568    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.455568    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.455568    4268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:05:42.650836    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:05:42.650836    4268 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 06:05:42.650836    4268 ubuntu.go:190] setting up certificates
	I1210 06:05:42.650836    4268 provision.go:84] configureAuth start
	I1210 06:05:42.655100    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:42.713113    4268 provision.go:143] copyHostCerts
	I1210 06:05:42.713113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 06:05:42.713113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 06:05:42.713113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 06:05:42.714114    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 06:05:42.714114    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 06:05:42.714114    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 06:05:42.715113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 06:05:42.715113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 06:05:42.715113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 06:05:42.716114    4268 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
	I1210 06:05:42.798580    4268 provision.go:177] copyRemoteCerts
	I1210 06:05:42.802588    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:05:42.805578    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.862278    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:42.996859    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:05:43.030822    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:05:43.062798    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:05:43.094379    4268 provision.go:87] duration metric: took 443.5373ms to configureAuth
	I1210 06:05:43.094426    4268 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:05:43.094529    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:43.098320    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.157455    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.158049    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.158049    4268 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 06:05:43.340189    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 06:05:43.340189    4268 ubuntu.go:71] root file system type: overlay
	I1210 06:05:43.340189    4268 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 06:05:43.343620    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.397863    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.398871    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.398902    4268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 06:05:43.595156    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 06:05:43.598799    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.653593    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.654604    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.654630    4268 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 06:05:43.838408    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
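
The update just logged is idempotent: the candidate unit is written to docker.service.new, and only when diff -u exits non-zero (the files differ) is the new file moved into place and the service reloaded, enabled, and restarted. A minimal Go sketch of that pattern, assuming a Linux host with sudo available (a hypothetical helper, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func updateUnit() error {
		// `diff -u` exits 0 only when the two files are identical.
		if exec.Command("sudo", "diff", "-u",
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new").Run() == nil {
			return nil // unchanged: skip daemon-reload and restart
		}
		steps := [][]string{
			{"sudo", "mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
			{"sudo", "systemctl", "-f", "daemon-reload"},
			{"sudo", "systemctl", "-f", "enable", "docker"},
			{"sudo", "systemctl", "-f", "restart", "docker"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v (%s)", s, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := updateUnit(); err != nil {
			fmt.Println(err)
		}
	}
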
	I1210 06:05:43.838408    4268 machine.go:97] duration metric: took 1.939733s to provisionDockerMachine
	I1210 06:05:43.838408    4268 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 06:05:43.838408    4268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:05:43.843330    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:05:43.846525    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.900024    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.029680    4268 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:05:44.037541    4268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:05:44.037541    4268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:05:44.037541    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 06:05:44.038757    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 06:05:44.043153    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 06:05:44.055384    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 06:05:44.088733    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 06:05:44.119280    4268 start.go:296] duration metric: took 280.8687ms for postStartSetup
	I1210 06:05:44.124009    4268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:05:44.126784    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.182044    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.316788    4268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:05:44.324843    4268 fix.go:56] duration metric: took 2.4919994s for fixHost
	I1210 06:05:44.324843    4268 start.go:83] releasing machines lock for "functional-871500", held for 2.4919994s
	I1210 06:05:44.328923    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:44.381793    4268 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 06:05:44.385677    4268 ssh_runner.go:195] Run: cat /version.json
	I1210 06:05:44.386221    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.389012    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.441429    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.442469    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	W1210 06:05:44.560137    4268 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 06:05:44.563959    4268 ssh_runner.go:195] Run: systemctl --version
	I1210 06:05:44.577858    4268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:05:44.589693    4268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:05:44.594579    4268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:05:44.610144    4268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:05:44.610144    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:44.610144    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:44.610144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:44.637889    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:05:44.661390    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:05:44.675857    4268 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:05:44.679682    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1210 06:05:44.688700    4268 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 06:05:44.688700    4268 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 06:05:44.703844    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.722937    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:05:44.745466    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.764651    4268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:05:44.786058    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:05:44.803943    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:05:44.825767    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:05:44.844801    4268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:05:44.865558    4268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:05:44.882679    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:45.109626    4268 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 06:05:45.372410    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:45.372488    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:45.376725    4268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 06:05:45.404975    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.427035    4268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:05:45.453802    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.475732    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:05:45.493918    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:45.524028    4268 ssh_runner.go:195] Run: which cri-dockerd
	I1210 06:05:45.535197    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 06:05:45.548646    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 06:05:45.572635    4268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 06:05:45.724104    4268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 06:05:45.868966    4268 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 06:05:45.869084    4268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 06:05:45.901140    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 06:05:45.921606    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:46.074547    4268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 06:05:47.064088    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:05:47.086611    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 06:05:47.108595    4268 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 06:05:47.134813    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.157362    4268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 06:05:47.294625    4268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 06:05:47.445441    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.584076    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 06:05:47.608696    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 06:05:47.631875    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.796110    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 06:05:47.918397    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.936744    4268 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 06:05:47.940567    4268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 06:05:47.948674    4268 start.go:564] Will wait 60s for crictl version
	I1210 06:05:47.953390    4268 ssh_runner.go:195] Run: which crictl
	I1210 06:05:47.964351    4268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:05:48.010041    4268 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 06:05:48.014800    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.056120    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.095316    4268 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 06:05:48.098689    4268 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 06:05:48.299568    4268 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 06:05:48.303921    4268 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 06:05:48.317690    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:48.374840    4268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:05:48.377516    4268 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:05:48.377840    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:48.382038    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.417200    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.417200    4268 docker.go:621] Images already preloaded, skipping extraction
	I1210 06:05:48.421745    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.451984    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.451984    4268 cache_images.go:86] Images are preloaded, skipping loading
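
Here the images listed by docker images --format "{{.Repository}}:{{.Tag}}" already cover the preload set, so tarball extraction is skipped. A simplified Go sketch of that membership check, assuming the required set is just a list of repository:tag strings (hypothetical, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// preloaded reports whether every required repository:tag already
	// appears in the newline-separated `docker images` output.
	func preloaded(dockerImages string, required []string) bool {
		have := map[string]bool{}
		for _, l := range strings.Split(strings.TrimSpace(dockerImages), "\n") {
			have[strings.TrimSpace(l)] = true
		}
		for _, r := range required {
			if !have[r] {
				return false
			}
		}
		return true
	}

	func main() {
		out := "registry.k8s.io/pause:3.10.1\nregistry.k8s.io/etcd:3.6.5-0\n"
		fmt.Println(preloaded(out, []string{"registry.k8s.io/pause:3.10.1"})) // true
	}
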
	I1210 06:05:48.451984    4268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 06:05:48.451984    4268 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:05:48.455620    4268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 06:05:48.856277    4268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:05:48.856277    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:48.856277    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:48.856353    4268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:05:48.856353    4268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:05:48.856531    4268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
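
Note how the single ExtraOptions entry {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} from the cluster config surfaces as the name/value pair under apiServer extraArgs in the generated ClusterConfiguration above. A minimal Go sketch of that rendering step (hypothetical types and function, not minikube's own):

	package main

	import "fmt"

	// ExtraOption mirrors the flat {component, key, value} shape logged above.
	type ExtraOption struct{ Component, Key, Value string }

	// renderExtraArgs emits the v1beta4-style name/value list for one component.
	func renderExtraArgs(component string, opts []ExtraOption) string {
		s := "  extraArgs:\n"
		for _, o := range opts {
			if o.Component == component {
				s += fmt.Sprintf("    - name: %q\n      value: %q\n", o.Key, o.Value)
			}
		}
		return s
	}

	func main() {
		opts := []ExtraOption{{Component: "apiserver", Key: "enable-admission-plugins", Value: "NamespaceAutoProvision"}}
		fmt.Print("apiServer:\n" + renderExtraArgs("apiserver", opts))
	}
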
	
	I1210 06:05:48.860333    4268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:05:48.875980    4268 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:05:48.881099    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:05:48.893740    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 06:05:48.914721    4268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:05:48.934821    4268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1210 06:05:48.960316    4268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:05:48.972694    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:49.123118    4268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:05:49.255861    4268 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 06:05:49.255861    4268 certs.go:195] generating shared ca certs ...
	I1210 06:05:49.255861    4268 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:05:49.256902    4268 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 06:05:49.257201    4268 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 06:05:49.257329    4268 certs.go:257] generating profile certs ...
	I1210 06:05:49.257955    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 06:05:49.259233    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 06:05:49.259785    4268 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 06:05:49.259886    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 06:05:49.260142    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 06:05:49.260323    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 06:05:49.260584    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 06:05:49.260858    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 06:05:49.261989    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:05:49.291586    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:05:49.322755    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:05:49.365403    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:05:49.393221    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:05:49.422952    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:05:49.452108    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:05:49.481059    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:05:49.509597    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 06:05:49.540303    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 06:05:49.570456    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:05:49.600563    4268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:05:49.625982    4268 ssh_runner.go:195] Run: openssl version
	I1210 06:05:49.646811    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.665986    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 06:05:49.688481    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.697316    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.701997    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.756268    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:05:49.774475    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.792936    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 06:05:49.812585    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.820754    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.824743    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.871530    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:05:49.889957    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.909516    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:05:49.930952    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.939674    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.944280    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.991244    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
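The three test/ln/openssl cycles above (for 11304.pem, 113042.pem, and minikubeCA.pem) all follow the same convention: place the certificate under /usr/share/ca-certificates, compute its OpenSSL subject hash with "openssl x509 -hash -noout -in <cert>", and symlink it into /etc/ssl/certs as <hash>.0, the filename OpenSSL-based clients use for CA lookup. A rough sketch of that convention, shelling out to openssl just as the log does (illustrative only, assuming an openssl binary on PATH and write access to /etc/ssl/certs; this is not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject hash, e.g. /etc/ssl/certs/b5213941.0.
func installCA(certPath string) error {
	// openssl prints the subject hash (e.g. "b5213941") on stdout.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}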
	I1210 06:05:50.007593    4268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:05:50.020119    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:05:50.067344    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:05:50.116460    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:05:50.165520    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:05:50.215057    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:05:50.263721    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
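"openssl x509 -checkend 86400" exits non-zero when the certificate expires within the next 86400 seconds (24 hours); the six runs above are how the restart path confirms the existing control-plane certificates are still usable. The equivalent check in Go with crypto/x509, as an illustrative sketch taking the certificate path as an argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d from now, the crypto/x509 analogue of
// `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("expires within 24h:", soon)
}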
	I1210 06:05:50.308021    4268 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:50.311614    4268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.346733    4268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:05:50.360552    4268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:05:50.360580    4268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:05:50.364548    4268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:05:50.378578    4268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.383414    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:50.435757    4268 kubeconfig.go:125] found "functional-871500" server: "https://127.0.0.1:50086"
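With the Docker driver, the apiserver's port 8441 inside the container is published on an arbitrary host port; the inspect template above extracts that mapping (here 50086, hence the https://127.0.0.1:50086 server URL in the kubeconfig). An equivalent lookup as a sketch, reusing the same template, with the profile name taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker mapped to the container's
// 8441/tcp, using the same inspect template as the log line above.
func hostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("functional-871500")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("https://127.0.0.1:" + port)
}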
	I1210 06:05:50.443021    4268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:05:50.458083    4268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:49:09.404233938 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:05:48.941571180 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
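The drift check is exactly what the log shows: diff -u between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new. diff exits 0 when the files match and 1 when they differ, so an exit status of 1 is treated as drift; here the test changed enable-admission-plugins to NamespaceAutoProvision, which forces a reconfigure. A minimal version of that check (a sketch, not minikube's code):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// configDrifted runs `diff -u old new`. diff exits 0 when the files
// match, 1 when they differ, and >1 on error, so status 1 means drift.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted(os.Args[1], os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if drifted {
		fmt.Print(diff)
	}
}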
	I1210 06:05:50.458083    4268 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:05:50.462114    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.496795    4268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:05:50.522144    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:05:50.536445    4268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 05:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:53 /etc/kubernetes/scheduler.conf
	
	I1210 06:05:50.540786    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:05:50.560978    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:05:50.573948    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.578606    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:05:50.598347    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.624166    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.628272    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.646130    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:05:50.660886    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.664931    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
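Each grep-then-rm pair above is the same guard applied to a different kubeconfig: if the file under /etc/kubernetes does not reference https://control-plane.minikube.internal:8441, it is deleted so that the kubeadm kubeconfig phase below regenerates it. Roughly, as an illustrative sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureEndpoint deletes a kubeconfig that does not reference the
// expected control-plane URL, so kubeadm will regenerate it.
func ensureEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, f := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}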
	I1210 06:05:50.683408    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:05:50.706370    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:50.943551    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.490493    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.736715    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.807636    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
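Rather than a full kubeadm init, the restart path replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the updated config, with the version-pinned binaries prepended to PATH. Sketched as a loop, with paths taken from the log (this is not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Version-pinned binary dir and config path, as seen in the log above.
	binDir := "/var/lib/minikube/binaries/v1.35.0-rc.1"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		cmd := exec.Command(binDir+"/kubeadm", append(p, "--config", cfg)...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}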
	I1210 06:05:51.910188    4268 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:05:51.914776    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:52.416327    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:52.915603    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:53.415591    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:53.915503    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:54.417765    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:54.915417    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:55.415417    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:55.915755    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:56.416253    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:56.915455    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:57.415861    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:57.915608    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:58.414964    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:58.916008    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:59.416023    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:59.916693    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:00.415637    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:00.915380    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:01.415701    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:01.915624    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:02.415007    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:02.915306    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:03.416586    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:03.916409    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:04.415626    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:04.916918    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:05.415662    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:05.915410    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:06.415782    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:06.915788    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:07.415237    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:07.915596    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:08.415151    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:08.915783    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:09.415452    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:09.915630    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:10.416137    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:10.915739    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:11.416340    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:11.916010    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:12.415711    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:12.915617    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:13.415590    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:13.916131    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:14.415833    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:14.915810    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:15.415434    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:15.916011    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:16.415715    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:16.916214    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:17.416569    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:17.915928    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:18.415760    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:18.915854    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:19.416023    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:19.915707    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:20.416022    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:20.915512    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:21.415449    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:21.915862    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:22.416187    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:22.915711    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:23.415407    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:23.916748    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:24.416067    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:24.915622    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:25.416460    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:25.916776    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:26.416986    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:26.915804    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:27.415924    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:27.915868    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:28.416289    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:28.915816    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:29.416455    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:29.916444    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:30.416956    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:30.917223    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:31.416570    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:31.916710    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:32.415252    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:32.916148    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:33.415760    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:33.915822    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:34.416279    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:34.915815    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:35.416215    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:35.916205    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:36.416507    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:36.915722    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:37.415763    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:37.915757    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:38.415942    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:38.915700    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:39.416506    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:39.915713    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:40.416558    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:40.916458    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:41.416738    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:41.916360    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:42.416858    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:42.916503    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:43.416468    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:43.915432    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:44.416286    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:44.915769    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:45.416376    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:45.916158    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:46.416260    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:46.916747    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:47.416302    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:47.915950    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:48.416456    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:48.916313    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:49.416114    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:49.916313    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:50.417029    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:50.916444    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:51.416929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
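From 06:05:51 to 06:06:51 the runner polls "sudo pgrep -xnf kube-apiserver.*minikube.*" at roughly 500ms intervals, waiting for an apiserver process to appear; pgrep exits 0 only when at least one process matches. After a minute with no match it falls through to the log-collection loop below. The shape of that wait, as an illustrative sketch:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process matching
// the minikube pattern appears, or the context times out.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when at least one process matches.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}

In this run the loop never succeeds: as the following docker ps checks confirm, no kube-apiserver container ever came up after the reconfigure.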
	I1210 06:06:51.915349    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:51.946488    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.946488    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:51.950223    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:51.978835    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.978835    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:51.982107    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:52.014720    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.014720    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:52.018659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:52.049849    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.049849    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:52.053813    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:52.081237    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.081237    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:52.085458    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:52.112058    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.112058    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:52.115659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:52.145147    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.145147    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:52.145147    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:52.145147    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:52.208920    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:52.208920    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:52.238472    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:52.238472    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:52.325434    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:52.325434    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:52.325434    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:52.371108    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:52.371108    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
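The container-status command above is a shell-level fallback: it resolves crictl with "which crictl || echo crictl", runs it, and if that fails lists containers with plain docker ps -a instead. The same try-first-tool-then-fall-back pattern in Go (a sketch):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// containerStatus tries crictl first and, if it is missing or fails,
// falls back to docker, mirroring the shell fallback in the log.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}

The same gather cycle (kubelet, dmesg, describe nodes, Docker, container status) then repeats every few seconds for as long as the apiserver stays down, which is why the remainder of this log is near-identical blocks differing only in timestamps.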
	I1210 06:06:54.948530    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:54.972933    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:55.001036    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.001036    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:55.004290    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:55.032943    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.033029    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:55.036668    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:55.063474    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.063474    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:55.066822    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:55.095034    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.095034    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:55.098842    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:55.125575    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.125575    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:55.128696    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:55.158053    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.158053    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:55.161225    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:55.188975    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.188975    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:55.188975    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:55.188975    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:55.248739    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:55.248739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:55.280459    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:55.280994    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:55.367741    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:55.357007   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.358211   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.360797   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.361943   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.363117   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:55.357007   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.358211   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.360797   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.361943   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.363117   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:55.367741    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:55.367741    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:55.414124    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:55.414124    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:06:57.973920    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:57.999748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:58.030430    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.030430    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:58.034282    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:58.061116    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.061116    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:58.064723    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:58.091888    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.091888    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:58.095665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:58.123935    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.123935    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:58.127445    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:58.154330    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.154330    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:58.157668    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:58.184825    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.184842    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:58.188704    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:58.215563    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.215563    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:58.215563    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:58.215563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:58.279351    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:58.279351    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:58.309783    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:58.309783    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:58.393286    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:58.382107   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.383660   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.385217   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.386504   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.387262   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:58.382107   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.383660   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.385217   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.386504   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.387262   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:58.393286    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:58.393286    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:58.439058    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:58.439058    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:00.997523    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:01.021828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:01.053542    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.053618    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:01.056677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:01.085032    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.085032    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:01.088780    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:01.117302    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.117302    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:01.120752    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:01.148911    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.148911    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:01.152164    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:01.180119    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.180119    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:01.183696    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:01.213108    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.213108    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:01.216996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:01.243946    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.243946    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:01.243946    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:01.243946    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:01.326430    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:01.314277   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.315265   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.319210   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.320225   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.321052   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:01.314277   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.315265   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.319210   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.320225   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.321052   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:01.326430    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:01.326459    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:01.370668    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:01.370668    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:01.422598    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:01.422598    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:01.484373    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:01.484373    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:04.021695    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:04.044749    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:04.073749    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.073749    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:04.077613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:04.108271    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.108271    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:04.111712    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:04.140635    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.140635    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:04.143876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:04.172340    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.172340    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:04.176392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:04.202586    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.202586    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:04.207209    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:04.235404    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.235404    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:04.238669    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:04.269296    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.269296    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:04.269296    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:04.269296    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:04.333843    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:04.333843    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:04.363955    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:04.363955    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:04.444558    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:04.436237   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.437185   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.438566   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.439650   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.440909   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:04.436237   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.437185   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.438566   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.439650   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.440909   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:04.444558    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:04.445092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:04.491255    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:04.491387    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:07.052134    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:07.075975    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:07.105912    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.105948    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:07.109453    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:07.138043    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.138043    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:07.141960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:07.168363    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.168363    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:07.172168    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:07.199814    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.199814    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:07.204084    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:07.233711    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.233711    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:07.236936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:07.264933    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.264933    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:07.268534    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:07.295981    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.295981    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:07.295981    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:07.295981    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:07.344067    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:07.344067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:07.405677    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:07.405677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:07.435735    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:07.435735    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:07.519926    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:07.510232   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.511256   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.513848   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.515885   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.517364   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:07.510232   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.511256   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.513848   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.515885   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.517364   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:07.519926    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:07.519926    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:10.070185    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:10.092250    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:10.122601    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.122601    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:10.128232    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:10.158544    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.158544    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:10.162689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:10.190392    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.190392    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:10.194663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:10.222107    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.222107    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:10.226125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:10.252783    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.252783    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:10.256304    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:10.283397    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.283397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:10.287203    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:10.315917    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.315961    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:10.315961    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:10.315997    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:10.379613    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:10.379613    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:10.413908    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:10.413937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:10.494940    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:10.485289   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.486129   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.488300   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.489233   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.492215   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:10.485289   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.486129   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.488300   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.489233   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.492215   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:10.494940    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:10.494940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:10.539292    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:10.539292    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:13.096499    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:13.120311    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:13.151343    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.151343    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:13.156101    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:13.187337    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.187337    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:13.190270    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:13.219411    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.219439    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:13.222798    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:13.249771    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.249771    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:13.253831    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:13.281375    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.281375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:13.285787    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:13.313732    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.313732    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:13.317446    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:13.345700    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.345700    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:13.345700    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:13.345745    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:13.390315    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:13.390315    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:13.448999    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:13.448999    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:13.479056    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:13.479056    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:13.560071    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:13.549957   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.551004   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.553955   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.555549   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.557226   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:13.549957   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.551004   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.553955   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.555549   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.557226   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:13.560113    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:13.560113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:16.115604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:16.139172    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:16.166471    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.166471    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:16.169908    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:16.197926    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.197926    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:16.201554    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:16.228895    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.228895    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:16.233644    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:16.261634    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.261634    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:16.265293    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:16.290403    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.290403    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:16.294262    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:16.322219    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.322219    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:16.326037    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:16.354206    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.354206    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:16.354206    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:16.354206    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:16.419895    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:16.419895    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:16.451758    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:16.451758    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:16.530533    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:16.520075   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.522508   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.523655   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.525647   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.527182   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:16.520075   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.522508   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.523655   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.525647   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.527182   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:16.530563    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:16.530563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:16.577832    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:16.577832    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:19.135824    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:19.161092    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:19.193445    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.193445    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:19.196612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:19.224210    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.224263    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:19.227196    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:19.255555    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.255555    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:19.259039    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:19.288567    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.288567    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:19.292040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:19.320589    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.320589    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:19.324658    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:19.351319    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.351319    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:19.355558    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:19.381847    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.381847    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:19.381847    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:19.381847    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:19.449609    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:19.449609    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:19.481141    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:19.481141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:19.571805    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:19.560658   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.564410   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.566480   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.567250   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.569393   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:19.560658   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.564410   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.566480   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.567250   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.569393   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:19.571876    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:19.571876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:19.618670    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:19.618670    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.172007    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:22.194631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:22.223852    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.223852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:22.227213    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:22.259065    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.259065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:22.262548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:22.294541    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.294541    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:22.297904    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:22.326231    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.326231    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:22.330450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:22.355798    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.355798    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:22.359259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:22.387519    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.387519    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:22.391049    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:22.418109    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.418109    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:22.418109    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:22.418109    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:22.499328    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:22.489790   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.490896   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.491903   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.494536   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.495501   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:22.489790   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.490896   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.491903   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.494536   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.495501   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:22.499328    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:22.499328    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:22.543726    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:22.543726    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.597115    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:22.597115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:22.659436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:22.659436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.192803    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:25.217242    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:25.244925    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.244925    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:25.251081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:25.278953    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.278953    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:25.282665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:25.309347    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.309347    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:25.313377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:25.341665    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.341665    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:25.345141    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:25.371901    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.371901    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:25.375742    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:25.403341    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.403365    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:25.406946    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:25.437008    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.437008    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:25.437008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:25.437008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:25.488060    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:25.488060    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:25.551490    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:25.551490    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.582172    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:25.582172    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:25.657523    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:25.657523    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:25.657523    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:28.209929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:28.232843    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:28.261372    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.261372    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:28.265040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:28.292477    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.292505    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:28.296009    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:28.320486    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.320486    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:28.324280    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:28.351296    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.351296    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:28.355074    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:28.390195    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.390195    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:28.394179    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:28.421613    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.421613    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:28.425545    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:28.453777    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.453777    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:28.453777    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:28.453777    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:28.499488    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:28.499488    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:28.561776    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:28.561776    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:28.593067    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:28.593112    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:28.668150    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:28.668150    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:28.668150    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.218151    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:31.240923    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:31.271844    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.271844    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:31.275477    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:31.301769    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.301769    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:31.305651    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:31.332406    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.332406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:31.336005    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:31.363591    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.363591    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:31.366859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:31.394594    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.394594    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:31.397901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:31.427778    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.427801    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:31.431499    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:31.458018    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.458018    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:31.458052    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:31.458052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.504698    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:31.504698    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:31.560046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:31.560046    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:31.620436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:31.620436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:31.648931    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:31.648931    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:31.727951    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:31.718357   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.719615   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.720837   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.722218   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.723669   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:31.718357   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.719615   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.720837   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.722218   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.723669   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:34.232606    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:34.257055    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:34.288020    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.288020    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:34.291618    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:34.322496    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.322496    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:34.326328    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:34.354501    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.354501    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:34.358073    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:34.385199    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.385199    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:34.389140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:34.414316    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.414316    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:34.418016    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:34.445073    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.445073    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:34.448529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:34.479046    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.479046    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:34.479046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:34.479113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:34.540365    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:34.540365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:34.571107    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:34.571107    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:34.651369    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:34.639849   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.640797   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.643867   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.644948   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.645803   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:34.639849   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.640797   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.643867   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.644948   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.645803   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:34.651369    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:34.651369    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:34.695236    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:34.695236    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:37.251178    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:37.274825    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:37.305218    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.305218    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:37.308994    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:37.338625    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.338625    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:37.342529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:37.370849    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.370849    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:37.374620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:37.403744    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.403744    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:37.407240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:37.435170    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.435170    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:37.439347    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:37.464351    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.464351    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:37.468757    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:37.497371    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.497371    4268 logs.go:284] No container was found matching "kindnet"
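The probe sweep above is how each cycle checks the control plane: every component is looked up by its k8s_<name> container-name prefix, and an empty ID list produces the "No container was found matching ..." warning. The same sweep, written out as a loop (illustrative only; the component list is taken from the probes in this log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}')
      # Empty output here is what the log reports as "0 containers".
      [ -z "$ids" ] && echo "no container matching ${c}" || echo "${c}: ${ids}"
    done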
	I1210 06:07:37.497371    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:37.497371    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:37.559564    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:37.559564    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:37.588662    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:37.588662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:37.667884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:37.657246   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.658358   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.659261   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.661714   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.662832   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:37.667913    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:37.667913    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:37.713250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:37.713250    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
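Each cycle then gathers the same five sources: the kubelet and docker/cri-docker units via journalctl, kernel warnings via dmesg, "describe nodes" via kubectl, and container status. The container-status command relies on a shell fallback: when crictl is absent, "which crictl || echo crictl" still expands to the literal word crictl, that invocation fails, and the "|| sudo docker ps -a" branch runs instead. Spelled out (a sketch of the same behavior, not minikube's source):

    # Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a    # CRI view of all containers
    else
      sudo docker ps -a                    # fall back to the Docker CLI
    fi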
	I1210 06:07:40.270184    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:40.293820    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:40.321872    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.321872    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:40.325799    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:40.355617    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.355617    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:40.361421    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:40.389168    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.389168    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:40.393374    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:40.425493    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.425493    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:40.429344    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:40.458342    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.458342    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:40.462356    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:40.488885    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.488885    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:40.492942    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:40.521222    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.521222    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:40.521222    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:40.521222    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:40.571132    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:40.571132    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.622991    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:40.622991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:40.680418    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:40.680418    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:40.710767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:40.710767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:40.786884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:40.777278   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.778087   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.780838   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.781817   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.782760   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:43.292302    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:43.316416    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:43.341307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.341307    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:43.345027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:43.370307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.370307    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:43.374217    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:43.402135    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.402135    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:43.405647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:43.433991    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.434045    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:43.437705    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:43.465221    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.465221    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:43.468945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:43.494153    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.494153    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:43.497409    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:43.526559    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.526559    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:43.526559    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:43.526559    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:43.592034    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:43.592034    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:43.621625    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:43.621625    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:43.699225    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:43.688896   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.689744   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.691973   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.692804   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.695050   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:43.699225    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:43.699225    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:43.742683    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:43.742683    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:46.296260    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:46.320038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:46.350083    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.350127    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:46.354017    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:46.392667    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.392667    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:46.396040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:46.423477    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.423477    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:46.427089    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:46.457044    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.457044    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:46.461309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:46.492133    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.492133    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:46.496367    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:46.523683    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.523683    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:46.528125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:46.556662    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.556662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:46.556662    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:46.556662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:46.622661    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:46.622661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:46.653087    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:46.653087    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:46.737036    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:46.725117   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.726037   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.729627   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.731599   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.733777   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:46.737036    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:46.737036    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:46.781873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:46.781873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.335832    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:49.359246    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:49.391481    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.391481    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:49.395372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:49.425639    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.425639    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:49.429616    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:49.457273    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.457273    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:49.460755    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:49.490445    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.490445    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:49.496643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:49.526292    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.526292    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:49.530371    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:49.557314    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.557359    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:49.561590    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:49.591753    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.591753    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:49.591753    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:49.591753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:49.621767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:49.621767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:49.707223    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:49.697858   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.698899   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.699785   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.703604   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.704517   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:49.707223    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:49.707223    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:49.751158    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:49.751158    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.799885    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:49.799885    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.366303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:52.390862    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:52.425737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.425770    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:52.429505    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:52.457550    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.457550    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:52.461709    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:52.488406    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.488406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:52.492766    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:52.518703    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.518703    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:52.522666    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:52.550619    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.550619    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:52.554570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:52.583512    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.583512    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:52.587153    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:52.614737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.614737    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:52.614737    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:52.614811    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.677940    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:52.677940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:52.709363    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:52.709363    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:52.791705    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:52.781560   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.782422   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.785208   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.786343   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.787080   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:52.791705    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:52.791705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:52.835266    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:52.835266    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.404989    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:55.433031    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:55.462583    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.462583    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:55.466139    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:55.492223    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.492223    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:55.495759    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:55.523357    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.523357    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:55.530265    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:55.561457    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.561457    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:55.565257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:55.594178    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.594178    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:55.599162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:55.627914    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.627914    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:55.632194    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:55.659551    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.659551    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:55.659551    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:55.659551    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:55.705228    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:55.705228    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.758018    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:55.758018    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:55.819730    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:55.819730    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:55.848800    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:55.848800    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:55.933602    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:55.919237   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.920249   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.924524   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.925340   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.926446   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
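The timestamps show the whole sequence repeating on a roughly three-second cadence (06:07:34, :37, :40, ... through 06:08:07), paced by the "pgrep -xnf kube-apiserver.*minikube.*" poll that opens each cycle. A stripped-down version of such a wait loop (illustrative; the real loop lives in minikube's Go code, and the 3 s interval and overall deadline here are inferred from the timestamps, not confirmed):

    deadline=$(( $(date +%s) + 300 ))   # assumed overall timeout, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && \
        { echo 'timed out waiting for kube-apiserver' >&2; exit 1; }
      sleep 3                           # matches the cadence seen in this log
    done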
	I1210 06:07:58.439191    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:58.463828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:58.497407    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.497407    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:58.500686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:58.530436    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.530436    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:58.533685    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:58.561959    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.561959    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:58.566417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:58.596302    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.596302    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:58.600866    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:58.629840    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.629840    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:58.633617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:58.660127    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.660127    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:58.663612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:58.692189    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.692189    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:58.692189    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:58.692189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:58.754556    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:58.754556    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:58.784251    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:58.784251    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:58.866899    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:58.854125   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.855115   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.856391   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.857985   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.859051   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:58.866899    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:58.866899    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:58.914793    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:58.914793    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:01.470823    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:01.494469    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:01.522381    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.522381    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:01.528647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:01.558012    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.558012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:01.564708    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:01.593835    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.593835    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:01.599056    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:01.623982    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.623982    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:01.627479    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:01.658260    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.658260    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:01.665836    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:01.697664    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.697664    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:01.702191    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:01.729816    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.729816    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:01.729816    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:01.729816    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:01.788909    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:01.788909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:01.819503    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:01.819503    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:01.901569    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:01.889489   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.890512   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.891524   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.892377   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.894500   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:01.901569    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:01.901569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:01.947339    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:01.947339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.502871    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:04.526200    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:04.558543    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.558543    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:04.563525    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:04.595332    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.595332    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:04.598770    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:04.630572    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.630572    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:04.635710    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:04.664369    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.664369    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:04.668951    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:04.699382    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.699382    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:04.702341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:04.732274    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.732274    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:04.735620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:04.763772    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.763772    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:04.763772    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:04.763866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:04.790890    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:04.790890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:04.872353    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:04.859391   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.860351   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.864058   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.865079   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.866076   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
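With every component probe coming back empty, the kubelet logs already being collected above are the natural next stop: if the kubelet is down or cannot start the static pods, no k8s_* containers are ever created and the apiserver port stays closed. A hedged triage sketch (the static-pod manifest path is the standard kubeadm location, assumed here and not confirmed by this log):

    sudo systemctl is-active kubelet                      # is the kubelet running at all?
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -iE 'error|fail' | tail -n 20                # recent kubelet complaints
    ls /etc/kubernetes/manifests/                         # assumed static-pod manifest dir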
	I1210 06:08:04.872353    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:04.872353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:04.916959    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:04.916959    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.965485    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:04.965560    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
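Every describe-nodes attempt in this stretch of the log dies the same way: kubectl cannot even fetch the API group list because TCP connects to [::1]:8441 are refused outright, meaning nothing is listening on the apiserver port at all. A minimal Go sketch of that reachability check, written for this report rather than taken from minikube's source (the address and timeout below are assumptions read off the log):

// portprobe: distinguishes "connection refused" (port closed, no apiserver
// process) from a slow or unreachable host (dial timeout).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "localhost:8441" // apiserver port seen in the kubectl errors above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connect: connection refused" here means the port is closed,
		// i.e. kube-apiserver never came up, matching this log.
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening at %s\n", addr)
}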
	I1210 06:08:07.533039    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:07.559067    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:07.588219    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.588219    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:07.591689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:07.619350    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.619350    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:07.622996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:07.652464    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.652464    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:07.657960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:07.688918    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.688918    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:07.692848    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:07.722521    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.722521    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:07.726603    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:07.755963    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.755963    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:07.760630    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:07.790252    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.790252    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:07.790252    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:07.790327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.852838    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:07.852838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:07.883838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:07.883838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:07.961862    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:07.961862    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:07.961862    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:08.003991    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:08.003991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
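Each polling cycle above runs the same per-component container check, docker ps -a --filter=name=k8s_<component> --format={{.ID}}, and logs.go:282 reports how many IDs came back (here always "0 containers: []"). A stand-alone Go sketch of that check, with a hypothetical listContainers helper, assuming only that docker is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the logged command: list all containers (including
// exited ones, because of -a) whose name matches the filter, IDs only.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty slice when nothing matches
}

func main() {
	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. "0 containers: []"
	}
}

Since -a also lists exited containers, zero matches for every control-plane name suggests the apiserver container was never created, which is consistent with the refused connections above.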
	I1210 06:08:10.563653    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:10.586319    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:10.613645    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.613645    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:10.617237    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:10.646795    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.646795    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:10.652694    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:10.683833    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.683833    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:10.688294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:10.718409    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.718409    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:10.722444    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:10.746660    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.746660    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:10.751527    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:10.781904    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.781904    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:10.787205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:10.814738    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.814738    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:10.814738    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:10.814792    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:10.841682    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:10.841682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:10.922604    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:10.922639    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:10.922661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:10.968300    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:10.968300    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:11.016711    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:11.016711    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
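The opener of each cycle, sudo pgrep -xnf kube-apiserver.*minikube.*, is the process-level version of the same probe. Assuming pgrep here is the usual procps tool, -f matches against the full command line, -x requires the pattern to match that whole line, -n keeps only the newest PID, and exit status 1 means no process matched. A short Go sketch of how that exit status reads:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		// Exit status 1 from pgrep: no matching process, the case
		// throughout this log.
		fmt.Println("no kube-apiserver process is running")
		return
	}
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	fmt.Printf("newest kube-apiserver pid: %s", out)
}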
	I1210 06:08:13.584862    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:13.607945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:13.639757    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.639757    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:13.643362    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:13.673001    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.673001    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:13.676417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:13.706241    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.706241    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:13.710040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:13.735617    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.735840    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:13.738750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:13.768821    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.768821    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:13.772175    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:13.801535    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.801535    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:13.805351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:13.832881    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.832881    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:13.832881    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:13.832881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:13.860208    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:13.860208    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:13.946278    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:13.946278    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:13.946278    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:13.991759    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:13.991759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:14.045144    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:14.045144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:16.612310    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:16.638180    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:16.667851    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.667851    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:16.671631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:16.700699    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.700699    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:16.706277    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:16.734906    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.734906    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:16.738957    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:16.766394    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.766394    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:16.772893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:16.802581    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.802581    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:16.808905    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:16.836566    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.836566    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:16.840142    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:16.868091    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.868091    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:16.868091    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:16.868091    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:16.897687    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:16.897687    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:16.975509    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:16.975509    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:16.975509    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:17.020453    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:17.020453    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:17.069748    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:17.069748    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:19.636799    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:19.659733    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:19.690968    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.690968    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:19.694619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:19.722863    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.722863    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:19.726187    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:19.752031    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.752031    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:19.755396    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:19.783376    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.783376    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:19.786987    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:19.814219    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.814219    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:19.817751    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:19.847004    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.847004    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:19.850402    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:19.881752    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.881752    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:19.881752    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:19.881752    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:19.930019    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:19.930019    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:19.983089    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:19.983089    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:20.045802    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:20.045802    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:20.077460    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:20.077460    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:20.162436    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:22.668475    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:22.691439    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:22.721661    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.721661    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:22.725309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:22.754031    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.754031    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:22.758027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:22.785864    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.785864    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:22.789619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:22.817384    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.817384    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:22.820727    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:22.851186    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.851186    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:22.855014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:22.883476    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.883476    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:22.887734    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:22.914588    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.914588    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:22.914588    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:22.914588    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:22.977189    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:22.977189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:23.007230    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:23.007230    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:23.085937    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:23.085937    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:23.085937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:23.128830    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:23.128830    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
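The "container status" step uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: try crictl when it is installed, and fall back to docker when crictl is missing or fails. The same fallback expressed as a minimal Go sketch, with a hypothetical containerStatus helper:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and succeeds, otherwise
// falls back to docker, mirroring the shell one-liner in the log.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker ps failed:", err)
		return
	}
	fmt.Print(string(out))
}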
	I1210 06:08:25.690109    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:25.713674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:25.742134    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.742164    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:25.745613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:25.771702    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.771789    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:25.775334    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:25.803239    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.803239    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:25.806686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:25.836716    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.836716    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:25.840387    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:25.867927    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.867927    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:25.871435    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:25.898205    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.898205    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:25.901920    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:25.931569    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.931569    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:25.931569    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:25.931569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:25.995604    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:25.995604    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:26.025733    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:26.025733    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:26.107058    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:26.094116   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.098292   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.099172   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.100188   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.101258   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:26.094116   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.098292   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.099172   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.100188   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.101258   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:26.107115    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:26.107115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:26.150320    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:26.150320    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:28.710236    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:28.735443    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:28.764680    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.764680    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:28.768537    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:28.795455    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.795455    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:28.799570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:28.826729    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.826729    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:28.830406    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:28.859191    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.859191    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:28.862919    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:28.888542    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.888542    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:28.892494    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:28.919951    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.919951    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:28.923351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:28.952838    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.952838    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:28.952838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:28.952909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:29.034485    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:29.023348   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.024187   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.026875   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.028120   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.029114   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:29.023348   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.024187   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.026875   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.028120   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.029114   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:29.034485    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:29.034485    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:29.079092    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:29.079092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:29.133555    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:29.133555    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:29.195221    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:29.195221    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:31.733591    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:31.757690    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:31.790674    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.790674    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:31.794674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:31.825657    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.825721    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:31.829403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:31.858023    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.858023    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:31.861500    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:31.890867    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.890914    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:31.894490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:31.922953    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.922953    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:31.927186    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:31.954090    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.954090    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:31.957750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:31.984886    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.984920    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:31.984920    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:31.984951    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:32.048671    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:32.048671    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:32.079259    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:32.079259    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:32.157323    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:32.157323    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:32.157323    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:32.203321    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:32.203321    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:34.760108    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:34.782876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:34.810927    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.810927    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:34.814663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:34.839714    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.839714    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:34.843722    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:34.870089    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.870089    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:34.873513    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:34.905367    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.905367    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:34.909301    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:34.938914    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.938914    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:34.942767    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:34.972329    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.972329    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:34.976046    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:35.000780    4268 logs.go:282] 0 containers: []
	W1210 06:08:35.000780    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:35.000780    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:35.000838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:35.065353    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:35.065353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:35.095634    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:35.095634    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:35.171365    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:35.160656   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.162343   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.163491   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.165073   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.166057   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:35.171365    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:35.171365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:35.215605    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:35.215605    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:37.774322    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:37.798677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:37.827936    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.827990    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:37.831228    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:37.860987    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.861065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:37.864478    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:37.891877    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.891877    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:37.895716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:37.920808    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.920808    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:37.924309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:37.952553    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.952553    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:37.956204    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:37.985826    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.985826    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:37.989201    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:38.017309    4268 logs.go:282] 0 containers: []
	W1210 06:08:38.017309    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:38.017309    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:38.017309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:38.082876    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:38.083876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:38.113796    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:38.113821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:38.196088    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:38.184048   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.187012   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.188966   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.190400   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.191695   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:38.196123    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:38.196149    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:38.241227    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:38.241227    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:40.798944    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:40.821450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:40.850414    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.850414    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:40.853927    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:40.881239    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.881239    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:40.885281    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:40.912960    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.912960    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:40.918840    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:40.950469    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.950469    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:40.954401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:40.982375    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.982375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:40.986123    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:41.016542    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.016542    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:41.019622    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:41.049577    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.049662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:41.049662    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:41.049694    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:41.076753    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:41.076753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:41.160411    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:41.148000   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.148852   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.151925   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.154289   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.155876   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:41.160445    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:41.160473    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:41.206612    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:41.206612    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:41.253715    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:41.253715    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:43.821604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:43.845650    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:43.874167    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.874207    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:43.877812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:43.905508    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.905508    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:43.909372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:43.939372    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.939426    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:43.942841    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:43.972078    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.972078    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:43.975697    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:44.002329    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.002329    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:44.005898    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:44.035821    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.035821    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:44.039602    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:44.066798    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.066839    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:44.066839    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:44.066839    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:44.128660    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:44.128660    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:44.159235    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:44.159235    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:44.242361    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:44.231367   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.232316   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.235308   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.236181   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.238800   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:44.242361    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:44.242361    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:44.289326    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:44.289326    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:46.852233    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:46.874656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:46.903255    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.903255    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:46.907117    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:46.935108    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.935108    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:46.938584    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:46.967525    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.967525    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:46.973772    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:47.001558    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.001558    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:47.005083    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:47.034015    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.034015    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:47.039271    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:47.068459    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.068459    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:47.071981    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:47.102013    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.102013    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:47.102044    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:47.102065    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:47.164592    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:47.164592    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:47.195491    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:47.195491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:47.278044    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:47.265991   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.268610   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.269567   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.271904   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.272596   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:47.278044    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:47.278044    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:47.324863    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:47.324863    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:49.880727    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:49.903789    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:49.935342    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.935342    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:49.938737    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:49.965312    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.965312    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:49.968607    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:49.996188    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.996188    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:50.001257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:50.027750    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.027750    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:50.031128    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:50.062729    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.062803    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:50.067118    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:50.095830    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.095830    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:50.099864    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:50.130283    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.130283    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:50.130283    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:50.130283    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:50.193360    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:50.193360    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:50.221703    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:50.221703    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:50.303176    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:50.293680   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.294854   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.296200   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.298483   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.299446   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:50.303176    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:50.303176    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:50.370163    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:50.370163    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:52.928303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:52.953491    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:52.981271    4268 logs.go:282] 0 containers: []
	W1210 06:08:52.981271    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:52.985316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:53.013881    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.013881    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:53.017036    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:53.045261    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.045261    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:53.049312    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:53.077577    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.077577    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:53.080557    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:53.110750    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.110750    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:53.114132    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:53.141372    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.141372    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:53.145576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:53.175705    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.175705    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:53.175705    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:53.175705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:53.237519    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:53.237519    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:53.267260    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:53.267260    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:53.363780    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:53.355380   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.356544   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.357888   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.359124   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.360377   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:53.363780    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:53.363780    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:53.409834    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:53.409834    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:55.976440    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:56.001300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:56.033852    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.033852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:56.037643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:56.065934    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.065934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:56.072377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:56.102560    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.102560    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:56.106392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:56.143025    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.143025    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:56.149239    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:56.176909    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.176909    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:56.180641    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:56.208166    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.208227    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:56.211221    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:56.240358    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.240358    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:56.240358    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:56.240358    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:56.303618    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:56.303618    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:56.333844    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:56.333844    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:56.416014    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:56.416014    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:56.416014    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:56.461496    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:56.461496    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.013428    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:59.038379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:59.067727    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.067758    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:59.071379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:59.104272    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.104272    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:59.107653    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:59.133866    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.133866    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:59.137442    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:59.164317    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.164317    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:59.168171    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:59.198264    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.198291    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:59.202014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:59.229252    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.229252    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:59.233058    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:59.262804    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.262837    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:59.262837    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:59.262866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:59.309986    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:59.309986    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.362017    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:59.362052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:59.422749    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:59.422749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:59.453982    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:59.453982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:59.534843    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:59.524756   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.525914   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.526844   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.529305   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.530549   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:02.039970    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:02.063736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:02.094049    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.094049    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:02.097680    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:02.124934    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.124934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:02.130724    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:02.158566    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.158566    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:02.162548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:02.188736    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.188736    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:02.192205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:02.222271    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.222271    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:02.225729    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:02.256473    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.256473    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:02.260671    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:02.287011    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.287011    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:02.287011    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:02.287011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:02.392011    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:02.382734   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.383733   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.385038   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.386241   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.387283   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:02.392011    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:02.392011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:02.440008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:02.440008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:02.494764    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:02.494764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:02.553322    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:02.553322    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:05.090291    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:05.112936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:05.141630    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.141630    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:05.144882    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:05.180128    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.180128    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:05.184542    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:05.213219    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.213219    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:05.216935    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:05.244351    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.244351    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:05.248038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:05.277710    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.277760    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:05.281504    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:05.310297    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.310297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:05.314071    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:05.352094    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.352094    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:05.352094    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:05.352094    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:05.398783    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:05.398896    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:05.458685    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:05.458685    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:05.489319    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:05.489319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:05.565657    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:05.565657    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:05.565657    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.115745    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:08.138736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:08.171066    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.171066    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:08.174894    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:08.201941    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.201941    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:08.205547    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:08.233859    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.233859    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:08.237566    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:08.264996    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.264996    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:08.269259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:08.294641    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.294641    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:08.298901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:08.350200    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.350200    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:08.356240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:08.383315    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.383315    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:08.383354    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:08.383372    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:08.448982    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:08.448982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:08.479093    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:08.479093    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:08.560338    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:08.560338    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:08.560338    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.606173    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:08.606173    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:11.159744    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:11.183765    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:11.210674    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.210698    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:11.214341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:11.240117    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.240117    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:11.243522    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:11.272551    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.272551    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:11.276401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:11.305619    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.305619    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:11.309310    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:11.360405    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.360447    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:11.363925    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:11.393251    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.393251    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:11.397006    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:11.426962    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.426962    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:11.426962    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:11.426962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:11.477327    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:11.477327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:11.532161    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:11.532161    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:11.592212    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:11.592212    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:11.622686    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:11.622686    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:11.705726    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
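	Every cycle above ends the same way: nothing is listening on localhost:8441, so each kubectl attempt is refused at TCP connect before any API-level error can occur. A minimal sketch of the probe the loop keeps repeating, assuming shell access to the minikube node (illustrative only, not minikube source; /healthz is the apiserver's standard health endpoint):

		# Poll for the apiserver process the way the log does (about every 3 s),
		# surfacing the raw TCP symptom while it is absent:
		while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		    curl -sk --max-time 2 https://localhost:8441/healthz || echo 'connection refused'
		    sleep 3
		done

	Once this loop exits, the apiserver process exists and the describe-nodes step would be expected to stop failing.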
	I1210 06:09:14.210675    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:14.234399    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:14.264863    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.264863    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:14.268775    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:14.300413    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.300413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:14.304487    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:14.346847    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.346847    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:14.350643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:14.380435    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.380435    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:14.384376    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:14.412797    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.412797    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:14.416519    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:14.447397    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.447397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:14.450969    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:14.478632    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.478695    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:14.478695    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:14.478695    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:14.528915    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:14.528915    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:14.588962    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:14.588962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:14.618677    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:14.618677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:14.700289    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:14.700289    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:14.700289    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.249092    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:17.272763    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:17.300862    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.300952    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:17.306099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:17.346725    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.346725    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:17.350199    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:17.377982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.377982    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:17.380998    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:17.409995    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.409995    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:17.414294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:17.442988    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.442988    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:17.449120    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:17.475982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.475982    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:17.479552    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:17.506308    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.506308    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:17.506308    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:17.506308    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.553141    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:17.553141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:17.607169    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:17.607169    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:17.668742    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:17.668742    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:17.697789    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:17.697789    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:17.779510    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:20.283521    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:20.307295    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:20.338053    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.338053    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:20.341656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:20.372543    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.372543    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:20.376481    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:20.403212    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.403212    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:20.406617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:20.433422    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.433422    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:20.437081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:20.465523    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.465523    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:20.469716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:20.497769    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.497769    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:20.501184    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:20.528203    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.528203    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:20.528203    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:20.528203    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:20.604309    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:20.604309    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:20.604309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:20.649121    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:20.649121    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:20.700336    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:20.700336    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:20.761156    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:20.761156    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
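	Each pass also probes seven component containers with docker name filters. The k8s_ prefix comes from the dockershim/cri-dockerd naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name filter on the prefix alone is sufficient. The same scan as one loop (a sketch assuming the docker CLI on the node, not minikube source):

		# One pass over the same seven container lookups shown in the log:
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		         kube-controller-manager kindnet; do
		    ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
		    echo "${c}: ${ids:-<none>}"
		done

	In this run every lookup returns an empty ID list, which is why each check is followed by a "No container was found matching" warning.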
	I1210 06:09:23.296453    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:23.318440    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:23.351977    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.351977    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:23.355449    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:23.384390    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.384413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:23.387748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:23.416613    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.416613    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:23.422740    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:23.447410    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.447410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:23.450859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:23.481298    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.481298    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:23.484812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:23.510855    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.510855    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:23.514267    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:23.543042    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.543042    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:23.543042    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:23.543042    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:23.608264    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:23.608264    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:23.639456    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:23.639491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:23.717275    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:23.717275    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:23.717319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:23.761563    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:23.761563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:26.321131    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:26.344893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:26.376780    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.376780    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:26.380359    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:26.408268    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.408268    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:26.411660    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:26.440862    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.440862    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:26.444048    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:26.473546    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.473546    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:26.476599    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:26.505151    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.505151    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:26.508748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:26.538121    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.538121    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:26.542550    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:26.569122    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.569122    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:26.569122    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:26.569122    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:26.629615    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:26.629615    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:26.660648    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:26.660648    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:26.741888    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:26.741888    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:26.741888    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:26.787954    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:26.787954    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:29.348252    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:29.372474    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:29.401265    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.401265    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:29.404730    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:29.435756    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.435805    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:29.439300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:29.470279    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.470279    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:29.474091    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:29.502410    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.502410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:29.505917    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:29.535595    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.535595    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:29.539532    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:29.568556    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.568556    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:29.572020    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:29.599739    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.599739    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:29.599739    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:29.599739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:29.661483    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:29.661483    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:29.691565    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:29.691565    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:29.774718    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:29.774718    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:29.774718    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:29.816878    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:29.816878    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:32.374472    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:32.397027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:32.429904    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.429904    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:32.433647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:32.460698    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.460756    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:32.464368    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:32.491682    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.491682    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:32.495066    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:32.523531    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.523531    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:32.526773    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:32.557102    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.557102    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:32.563482    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:32.591959    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.591959    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:32.595725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:32.625486    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.625486    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:32.625486    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:32.625486    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:32.688451    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:32.688451    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:32.719004    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:32.719004    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:32.800020    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:32.800020    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:32.800020    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:32.849061    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:32.849061    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:35.404633    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:35.429425    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:35.458232    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.458277    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:35.462316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:35.489097    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.489097    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:35.492725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:35.522979    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.522979    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:35.526587    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:35.555948    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.555948    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:35.559915    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:35.589220    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.589220    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:35.592883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:35.619789    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.619850    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:35.622872    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:35.649510    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.649534    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:35.649534    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:35.649534    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:35.714882    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:35.715881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:35.745666    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:35.745666    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:35.825749    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:35.812454   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.813402   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.819556   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.820578   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.821180   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
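
The block above repeats on every retry because each attempt ends the same way: "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port at all, not that the server answered badly. A minimal Go sketch of an equivalent reachability probe (a hypothetical helper for illustration, not minikube's actual code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeAPIServer makes a bare TCP connect to the apiserver address.
    // "connection refused" means the port is closed, i.e. no
    // kube-apiserver is running -- consistent with every "docker ps"
    // filter above returning 0 containers.
    func probeAPIServer(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return err // e.g. "connect: connection refused"
        }
        return conn.Close()
    }

    func main() {
        if err := probeAPIServer("localhost:8441"); err != nil {
            fmt.Println("apiserver not reachable:", err)
        }
    }
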
	I1210 06:09:35.825749    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:35.825749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:35.871102    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:35.871102    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.430887    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:38.453030    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:38.484706    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.484706    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:38.488140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:38.517210    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.517210    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:38.521162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:38.549348    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.549348    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:38.553103    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:38.580109    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.580109    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:38.583794    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:38.613855    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.613934    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:38.618771    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:38.647097    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.647097    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:38.650932    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:38.680610    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.680610    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:38.680610    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:38.680682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:38.758813    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:38.749300   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.750109   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753125   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753957   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.756268   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:38.758813    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:38.758813    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:38.807873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:38.807873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.867039    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:38.867067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:38.926759    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:38.926759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:41.462739    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:41.490464    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:41.518622    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.518622    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:41.524470    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:41.551685    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.551685    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:41.556977    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:41.584962    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.584962    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:41.588808    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:41.620594    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.620594    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:41.624185    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:41.656800    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.656800    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:41.659821    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:41.692628    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.692628    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:41.696287    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:41.726090    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.726090    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:41.726090    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:41.726090    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:41.803427    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:41.793678   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.794849   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.796092   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.797004   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.799523   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:41.803427    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:41.803427    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:41.849170    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:41.849170    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:41.903654    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:41.903654    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:41.962299    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:41.962299    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.500876    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:44.523403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:44.554849    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.554849    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:44.558352    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:44.588012    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.588012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:44.591883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:44.617831    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.617831    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:44.621490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:44.648689    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.648689    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:44.652490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:44.684042    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.684042    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:44.687539    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:44.716817    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.716856    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:44.720738    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:44.747250    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.747250    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:44.747250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:44.747318    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:44.798396    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:44.798396    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:44.858678    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:44.858678    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.888995    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:44.888995    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:44.964778    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:44.955796   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.956638   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.958906   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.960018   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.961253   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:44.964778    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:44.964778    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.517925    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:47.541890    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:47.573716    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.573716    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:47.577684    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:47.606333    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.606333    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:47.610098    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:47.635733    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.635733    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:47.639327    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:47.669406    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.669406    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:47.673219    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:47.700633    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.700633    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:47.705121    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:47.733323    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.733323    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:47.737104    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:47.763071    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.763071    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:47.763071    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:47.763140    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:47.826821    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:47.826821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:47.856590    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:47.856590    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:47.933339    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:47.922383   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.923323   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.927777   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.928818   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.930519   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:47.933339    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:47.933339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.979012    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:47.979012    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
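
Each gather cycle shells out to the Docker CLI with a name filter, as in the Run lines above. An illustrative local equivalent in Go (simplified: the real calls go through ssh_runner over SSH into the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns IDs of containers (including stopped ones,
    // hence -a) whose name matches the given filter, mirroring
    // "docker ps -a --filter=name=... --format={{.ID}}" from the log.
    func listContainers(nameFilter string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+nameFilter,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainers("k8s_kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

An empty result is what produces the "0 containers: []" lines and the matching "No container was found matching" warnings.
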
	I1210 06:09:50.532699    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:50.557240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:50.585813    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.585813    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:50.589369    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:50.622124    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.622124    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:50.625576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:50.650920    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.650920    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:50.653943    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:50.682545    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.682545    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:50.686340    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:50.715893    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.715893    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:50.719099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:50.748297    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.748297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:50.751451    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:50.779846    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.779866    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:50.779890    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:50.779890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.830198    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:50.830198    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:50.891330    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:50.891330    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:50.921331    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:50.921331    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:51.001029    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:50.991827   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.992701   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.996634   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.997913   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.999128   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:51.001029    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:51.001029    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:53.554507    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:53.573659    4268 kubeadm.go:602] duration metric: took 4m3.2099315s to restartPrimaryControlPlane
	W1210 06:09:53.573659    4268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:09:53.578070    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:09:54.057699    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:09:54.081355    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:09:54.095306    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:09:54.099578    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:09:54.113717    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:09:54.113717    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:09:54.118539    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:09:54.131350    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:09:54.135225    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:09:54.152710    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:09:54.164770    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:09:54.168898    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:09:54.185476    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.198490    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:09:54.202839    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.221180    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:09:54.234980    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:09:54.239197    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
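
The four grep/rm pairs above implement stale-config cleanup: each kubeconfig is kept only if it already references the expected control-plane endpoint, and removed otherwise (here every grep exits with status 2 simply because kubeadm reset already deleted the files). A sketch of that logic in Go, assuming local file access (hypothetical helper, for illustration only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleConfig deletes path unless the file already references
    // the expected endpoint -- the grep-then-rm sequence from the log.
    // Like "rm -f", a missing file is not treated as an error.
    func removeStaleConfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil // config already current; keep it
        }
        if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
            return err
        }
        return nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8441"
        for _, f := range []string{"admin.conf", "kubelet.conf",
            "controller-manager.conf", "scheduler.conf"} {
            fmt.Println(f, removeStaleConfig("/etc/kubernetes/"+f, endpoint))
        }
    }
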
	I1210 06:09:54.256185    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:09:54.367900    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:09:54.450675    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:09:54.549884    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:13:55.304144    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:13:55.304213    4268 kubeadm.go:319] 
	I1210 06:13:55.304353    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:13:55.308106    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:13:55.308252    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:13:55.308389    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:13:55.308682    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:13:55.309221    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:13:55.309881    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:13:55.310536    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:13:55.310642    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] OS: Linux
	I1210 06:13:55.310721    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:13:55.311254    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:13:55.311367    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:13:55.311538    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:13:55.311670    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:13:55.311750    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:13:55.311824    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:13:55.312446    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:13:55.316886    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:13:55.321599    4268 out.go:252]   - Booting up control plane ...
	I1210 06:13:55.322123    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:13:55.323161    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000948554s
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 
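
The root failure is the kubelet health check timing out: kubeadm polls the endpoint it describes as "curl -sSL http://127.0.0.1:10248/healthz" for up to 4m0s and never gets an answer. A minimal Go version of such a wait loop (illustrative only; kubeadm's real implementation differs):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint until it
    // returns 200 OK or the deadline passes.
    func waitKubeletHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        client := &http.Client{Timeout: 2 * time.Second}
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("kubelet not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitKubeletHealthy(
            "http://127.0.0.1:10248/healthz", 4*time.Minute))
    }

Because the loop never even gets a TCP connection, the suggested "journalctl -xeu kubelet" is the right next step: the kubelet process itself is not starting, so no amount of waiting on the health endpoint will succeed.
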
	W1210 06:13:55.324159    4268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000948554s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 06:13:55.329361    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:13:55.788774    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:13:55.807235    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:13:55.812328    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:13:55.824166    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:13:55.824166    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:13:55.829624    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:13:55.842900    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:13:55.846743    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:13:55.863007    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:13:55.876646    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:13:55.881322    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:13:55.900836    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.916668    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:13:55.921481    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.939813    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:13:55.954759    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:13:55.960058    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:13:55.976998    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:13:56.092783    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:13:56.183907    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:13:56.283504    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:17:56.874768    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:17:56.874768    4268 kubeadm.go:319] 
	I1210 06:17:56.875332    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:17:56.883860    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:17:56.883860    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:17:56.884428    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:17:56.884973    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:17:56.885550    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:17:56.886100    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] OS: Linux
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:17:56.886670    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:17:56.887297    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:17:56.890313    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:17:56.890917    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:17:56.891009    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:17:56.892230    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:17:56.892299    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:17:56.896667    4268 out.go:252]   - Booting up control plane ...
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:17:56.897780    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:17:56.897839    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077699s
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.897839    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 
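The wait-control-plane failure above comes from kubeadm polling the kubelet's health endpoint for 4m0s without ever getting an answer. The probe kubeadm describes, plus its own suggested follow-ups, can be run by hand (URL and commands copied from the log; the --max-time bound is added here for convenience):

	# Probe the kubelet healthz endpoint kubeadm waits on.
	curl -sSL --max-time 5 http://127.0.0.1:10248/healthz || echo "kubelet not healthy"
	# kubeadm's own troubleshooting suggestions:
	systemctl status kubelet
	journalctl -xeu kubelet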
	I1210 06:17:56.898801    4268 kubeadm.go:403] duration metric: took 12m6.5812244s to StartCluster
	I1210 06:17:56.898801    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:17:56.902808    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:17:57.138118    4268 cri.go:89] found id: ""
	I1210 06:17:57.138148    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.138172    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:17:57.138172    4268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:17:57.142698    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:17:57.185021    4268 cri.go:89] found id: ""
	I1210 06:17:57.185021    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.185021    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:17:57.185092    4268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:17:57.189241    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:17:57.228303    4268 cri.go:89] found id: ""
	I1210 06:17:57.228350    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.228350    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:17:57.228350    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:17:57.233381    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:17:57.304677    4268 cri.go:89] found id: ""
	I1210 06:17:57.304677    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.304677    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:17:57.304677    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:17:57.309206    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:17:57.355436    4268 cri.go:89] found id: ""
	I1210 06:17:57.355436    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.355436    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:17:57.355436    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:17:57.359252    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:17:57.404878    4268 cri.go:89] found id: ""
	I1210 06:17:57.404878    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.404878    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:17:57.404878    4268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:17:57.409876    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:17:57.451416    4268 cri.go:89] found id: ""
	I1210 06:17:57.451416    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.451499    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:17:57.451499    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:17:57.451499    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:17:57.506664    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:17:57.506764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:17:57.578699    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:17:57.578699    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:17:57.610293    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:17:57.610293    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:17:57.852641    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:17:57.852641    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:17:57.852641    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 06:17:57.899832    4268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:17:57.899832    4268 out.go:285] * 
	W1210 06:17:57.899832    4268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical to the kubeadm init stdout and stderr shown above; verbatim duplicate elided ...]
	
	W1210 06:17:57.900356    4268 out.go:285] * 
	W1210 06:17:57.902683    4268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:17:57.916933    4268 out.go:203] 
	W1210 06:17:57.920352    4268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical to the kubeadm init stdout and stderr shown above; verbatim duplicate elided ...]
	
	W1210 06:17:57.920907    4268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:17:57.921055    4268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:17:57.924778    4268 out.go:203] 
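The exit reason K8S_KUBELET_NOT_RUNNING together with the Suggestion line above points at a cgroup-driver problem. A sketch of the suggested retry, assuming the profile name from this run; whether it helps on a cgroup v1 WSL2 kernel is exactly what the kubelet journal below calls into question:

	# Retry with the flag minikube suggests (Suggestion line above).
	out/minikube-windows-amd64.exe start -p functional-871500 \
	    --extra-config=kubelet.cgroup-driver=systemd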
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939273296Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
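Note the cri-dockerd line above setting cgroupDriver to cgroupfs: on this cgroup v1 WSL2 kernel, that is the combination the kubelet validation rejects later in this dump. A generic way to confirm which cgroup version a host runs (a standard check, not taken from this log):

	# cgroup v2 mounts one unified hierarchy; v1 shows a tmpfs of controllers.
	stat -fc %T /sys/fs/cgroup/    # 'cgroup2fs' => v2, 'tmpfs' => v1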
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:17:59.893921   40255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:59.895465   40255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:59.897165   40255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:59.898555   40255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:59.899502   40255 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
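The refusal is consistent with the earlier crictl sweep: no control-plane container was ever created, so nothing listens on 8441. The same check the log gathering ran (command copied from the log above) comes back empty:

	# No kube-apiserver container exists, hence 'connection refused' on 8441.
	sudo crictl ps -a --quiet --name=kube-apiserver   # prints nothing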
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:17:59 up  1:46,  0 user,  load average: 0.27, 0.30, 0.44
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:17:56 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:17:57 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 06:17:57 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:57 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:57 functional-871500 kubelet[40007]: E1210 06:17:57.302830   40007 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:17:57 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:17:57 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:17:57 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 06:17:57 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:57 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:58 functional-871500 kubelet[40105]: E1210 06:17:58.053610   40105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:17:58 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:17:58 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:17:58 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 06:17:58 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:58 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:58 functional-871500 kubelet[40134]: E1210 06:17:58.764802   40134 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:17:58 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:17:58 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:17:59 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 06:17:59 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:59 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:17:59 functional-871500 kubelet[40159]: E1210 06:17:59.522359   40159 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:17:59 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:17:59 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
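The kubelet journal at the end of the dump shows the real blocker: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host unless FailCgroupV1 is explicitly set to false, exactly as the SystemVerification warning said. A hedged sketch of that opt-out, assuming the field is the camelCase failCgroupV1 in the kubelet's config.yaml (path taken from the kubelet-start lines above):

	# Hedged sketch: opt back into cgroup v1 for kubelet >= v1.35, per the
	# [WARNING SystemVerification] text. Field name assumed to be the
	# KubeletConfiguration camelCase form of 'FailCgroupV1'.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

Note this only silences the validation; moving the WSL2 host to cgroup v2 is the direction both warnings point at.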
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (580.5886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (740.76s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (54.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-871500 get po -l tier=control-plane -n kube-system -o=json
E1210 06:18:02.280393   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-871500 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.3440581s)

                                                
                                                
** stderr ** 
	E1210 06:18:11.767205   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:18:21.855252   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:18:31.894414   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:18:41.934709   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:18:51.979284   11800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-871500 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
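In the inspect output above, NetworkSettings.Ports records how each container port is published on the host loopback (22/tcp to 127.0.0.1:50082, 8441/tcp to 127.0.0.1:50086, and so on). Later entries in this log read those bindings back with a Go template; a minimal sketch of the same lookup, assuming the functional-871500 container from this run is still up:

    # Walk Ports -> "22/tcp" -> first binding -> HostPort using docker's Go templates.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-871500
    # prints 50082 for this run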
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (672.998ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.7038935s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-493600 ssh pgrep buildkitd                                                                                 │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr                │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls                                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ image   │ functional-493600 image ls --format json --alsologtostderr                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ service │ functional-493600 service hello-node --url                                                                            │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │                     │
	│ image   │ functional-493600 image ls --format table --alsologtostderr                                                           │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:43 UTC │ 10 Dec 25 05:43 UTC │
	│ delete  │ -p functional-493600                                                                                                  │ functional-493600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │ 10 Dec 25 05:48 UTC │
	│ start   │ -p functional-871500 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-rc.1 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:48 UTC │                     │
	│ start   │ -p functional-871500 --alsologtostderr -v=8                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:57 UTC │                     │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.1                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:3.3                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add registry.k8s.io/pause:latest                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache add minikube-local-cache-test:functional-871500                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ functional-871500 cache delete minikube-local-cache-test:functional-871500                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ list                                                                                                                  │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl images                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ cache   │ functional-871500 cache reload                                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                   │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl │ functional-871500 kubectl -- --context functional-871500 get pods                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ start   │ -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
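Rows in the Audit table whose END TIME column is empty never recorded completion; here that includes both "start -p functional-871500" invocations and the "kubectl -- --context functional-871500 get pods" probe, consistent with the failures collected in this report. As a hedged aside, the table is rendered from minikube's audit log, which recent releases can print directly (the command and the default file location below are assumptions about a stock install, not taken from this run):

    # Render the audit table from the local audit log:
    minikube logs --audit
    # Or inspect the raw entries it is built from:
    cat "$MINIKUBE_HOME/logs/audit.json"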
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:05:40
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
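For readers scanning these logs, the header format declared above decodes as follows for the first entry (a reading aid, not part of the captured output):

    I1210 06:05:40.939558    4268 out.go:360] ...
    I                severity: I=info, W=warning, E=error, F=fatal
    1210             mmdd: December 10
    06:05:40.939558  hh:mm:ss.uuuuuu wall-clock time
    4268             threadid that emitted the entry
    out.go:360       source file and line of the logging call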
	I1210 06:05:40.939558    4268 out.go:360] Setting OutFile to fd 1136 ...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.981558    4268 out.go:374] Setting ErrFile to fd 1864...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.994563    4268 out.go:368] Setting JSON to false
	I1210 06:05:40.997553    4268 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5672,"bootTime":1765341068,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:05:40.997553    4268 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:05:41.001553    4268 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:05:41.004553    4268 notify.go:221] Checking for updates...
	I1210 06:05:41.007553    4268 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:05:41.009554    4268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:05:41.013554    4268 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:05:41.018172    4268 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:05:41.020466    4268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:05:41.023475    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:41.023475    4268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:05:41.199301    4268 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:05:41.203110    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.444620    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.42593568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.449620    4268 out.go:179] * Using the docker driver based on existing profile
	I1210 06:05:41.451493    4268 start.go:309] selected driver: docker
	I1210 06:05:41.451493    4268 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.451493    4268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:05:41.457890    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.686631    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.6698388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.735496    4268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:05:41.735496    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:41.735496    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:41.735496    4268 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.741018    4268 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 06:05:41.744259    4268 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 06:05:41.749232    4268 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:05:41.752040    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:41.752173    4268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:05:41.752173    4268 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 06:05:41.752173    4268 cache.go:65] Caching tarball of preloaded images
	I1210 06:05:41.752485    4268 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 06:05:41.752621    4268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 06:05:41.752768    4268 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 06:05:41.832812    4268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:05:41.832812    4268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:05:41.832812    4268 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:05:41.832812    4268 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:05:41.832812    4268 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-871500"
	I1210 06:05:41.832812    4268 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:05:41.832812    4268 fix.go:54] fixHost starting: 
	I1210 06:05:41.839306    4268 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 06:05:41.895279    4268 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 06:05:41.895279    4268 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:05:41.898650    4268 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 06:05:41.898650    4268 machine.go:94] provisionDockerMachine start ...
	I1210 06:05:41.901828    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:41.956991    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:41.957565    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:41.957565    4268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:05:42.140179    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.140179    4268 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 06:05:42.144876    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.200094    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.200718    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.200718    4268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 06:05:42.397029    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.400561    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.454568    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.455568    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.455568    4268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:05:42.650836    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:05:42.650836    4268 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 06:05:42.650836    4268 ubuntu.go:190] setting up certificates
	I1210 06:05:42.650836    4268 provision.go:84] configureAuth start
	I1210 06:05:42.655100    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:42.713113    4268 provision.go:143] copyHostCerts
	I1210 06:05:42.713113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 06:05:42.713113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 06:05:42.713113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 06:05:42.714114    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 06:05:42.714114    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 06:05:42.714114    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 06:05:42.715113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 06:05:42.715113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 06:05:42.715113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 06:05:42.716114    4268 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
	I1210 06:05:42.798580    4268 provision.go:177] copyRemoteCerts
	I1210 06:05:42.802588    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:05:42.805578    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.862278    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:42.996859    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:05:43.030822    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:05:43.062798    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:05:43.094379    4268 provision.go:87] duration metric: took 443.5373ms to configureAuth
	I1210 06:05:43.094426    4268 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:05:43.094529    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:43.098320    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.157455    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.158049    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.158049    4268 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 06:05:43.340189    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 06:05:43.340189    4268 ubuntu.go:71] root file system type: overlay
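The probe above is how provisioning classifies the node's root filesystem (overlay inside the kicbase container) before rewriting the docker unit. The same one-liner can be replayed by hand over "minikube ssh" (a sketch mirroring the logged command):

    # Print only the filesystem type of /:
    df --output=fstype / | tail -n 1
    # -> overlay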
	I1210 06:05:43.340189    4268 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 06:05:43.343620    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.397863    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.398871    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.398902    4268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 06:05:43.595156    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 06:05:43.598799    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.653593    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.654604    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.654630    4268 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 06:05:43.838408    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:05:43.838408    4268 machine.go:97] duration metric: took 1.939733s to provisionDockerMachine
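The unit file written above relies on a systemd rule: for services that are not Type=oneshot, only a single ExecStart= may be in effect, so the empty ExecStart= line first clears whatever the base unit defined before the real command is set. A minimal sketch of the same reset pattern as a drop-in override (the path and the dockerd flags are illustrative, not taken from this run):

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    # Apply it the same way the log does:
    sudo systemctl daemon-reload && sudo systemctl restart docker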
	I1210 06:05:43.838408    4268 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 06:05:43.838408    4268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:05:43.843330    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:05:43.846525    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.900024    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.029680    4268 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:05:44.037541    4268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:05:44.037541    4268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:05:44.037541    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 06:05:44.038757    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 06:05:44.043153    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 06:05:44.055384    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 06:05:44.088733    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 06:05:44.119280    4268 start.go:296] duration metric: took 280.8687ms for postStartSetup
	I1210 06:05:44.124009    4268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:05:44.126784    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.182044    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.316788    4268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:05:44.324843    4268 fix.go:56] duration metric: took 2.4919994s for fixHost
	I1210 06:05:44.324843    4268 start.go:83] releasing machines lock for "functional-871500", held for 2.4919994s
	I1210 06:05:44.328923    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:44.381793    4268 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 06:05:44.385677    4268 ssh_runner.go:195] Run: cat /version.json
	I1210 06:05:44.386221    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.389012    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.441429    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.442469    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	W1210 06:05:44.560137    4268 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 06:05:44.563959    4268 ssh_runner.go:195] Run: systemctl --version
	I1210 06:05:44.577858    4268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:05:44.589693    4268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:05:44.594579    4268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:05:44.610144    4268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:05:44.610144    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:44.610144    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:44.610144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:44.637889    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:05:44.661390    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:05:44.675857    4268 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:05:44.679682    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1210 06:05:44.688700    4268 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 06:05:44.688700    4268 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 06:05:44.703844    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.722937    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:05:44.745466    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.764651    4268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:05:44.786058    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:05:44.803943    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:05:44.825767    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:05:44.844801    4268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:05:44.865558    4268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:05:44.882679    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:45.109626    4268 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 06:05:45.372410    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:45.372488    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:45.376725    4268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 06:05:45.404975    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.427035    4268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:05:45.453802    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.475732    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:05:45.493918    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:45.524028    4268 ssh_runner.go:195] Run: which cri-dockerd
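The /etc/crictl.yaml writes in this section first point crictl at containerd and then, once the docker runtime is settled on, at cri-dockerd. The endpoint can also be chosen per invocation rather than via the config file (a sketch using crictl's standard --runtime-endpoint flag):

    # Query the runtime through cri-dockerd without touching /etc/crictl.yaml:
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps
    # Or persist the choice, as the log above does:
    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml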
	I1210 06:05:45.535197    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 06:05:45.548646    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 06:05:45.572635    4268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 06:05:45.724104    4268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 06:05:45.868966    4268 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 06:05:45.869084    4268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 06:05:45.901140    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 06:05:45.921606    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:46.074547    4268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 06:05:47.064088    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:05:47.086611    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 06:05:47.108595    4268 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 06:05:47.134813    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.157362    4268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 06:05:47.294625    4268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 06:05:47.445441    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.584076    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 06:05:47.608696    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 06:05:47.631875    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.796110    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 06:05:47.918397    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.936744    4268 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 06:05:47.940567    4268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 06:05:47.948674    4268 start.go:564] Will wait 60s for crictl version
	I1210 06:05:47.953390    4268 ssh_runner.go:195] Run: which crictl
	I1210 06:05:47.964351    4268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:05:48.010041    4268 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 06:05:48.014800    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.056120    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.095316    4268 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 06:05:48.098689    4268 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 06:05:48.299568    4268 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 06:05:48.303921    4268 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
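Here minikube discovers the host's address by resolving host.docker.internal from inside the node container, then checks /etc/hosts for a matching host.minikube.internal entry. The lookup can be reproduced by hand with the same command the log runs (assuming the container is still running):

    docker exec -t functional-871500 dig +short host.docker.internal
    # -> 192.168.65.254 on this run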
	I1210 06:05:48.317690    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:48.374840    4268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:05:48.377516    4268 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
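
The single long line above is a Go struct (minikube's cluster config) rendered with fmt's %+v verb, which prints field names alongside values; that is why nested structs and maps appear inline on one line. A toy illustration of the same rendering, with made-up stand-in types:

    package main

    import "fmt"

    // Toy stand-ins for minikube's config types; field names are illustrative.
    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cc := ClusterConfig{
            Name:   "functional-871500",
            Driver: "docker",
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.35.0-rc.1",
                ClusterName:       "functional-871500",
            },
        }
        // %+v prints field names, matching the log's one-line dump style.
        fmt.Printf("updating cluster %+v ...\n", cc)
    }
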
	I1210 06:05:48.377840    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:48.382038    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.417200    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.417200    4268 docker.go:621] Images already preloaded, skipping extraction
	I1210 06:05:48.421745    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.451984    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
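
The two identical listings come from running `docker images --format {{.Repository}}:{{.Tag}}` twice; minikube treats the node as preloaded when every expected image tag is already present, which is why extraction and image loading are both skipped here. A minimal sketch of that set-membership check, with an illustrative helper name and a plain slice of required tags:

    package main

    import (
        "fmt"
        "strings"
    )

    // imagesPreloaded reports whether every required repo:tag appears in the
    // output of `docker images --format {{.Repository}}:{{.Tag}}`.
    func imagesPreloaded(dockerImagesOut string, required []string) bool {
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
            have[strings.TrimSpace(line)] = true
        }
        for _, img := range required {
            if !have[img] {
                return false
            }
        }
        return true
    }

    func main() {
        out := "registry.k8s.io/kube-apiserver:v1.35.0-rc.1\nregistry.k8s.io/pause:3.10.1"
        fmt.Println(imagesPreloaded(out, []string{"registry.k8s.io/pause:3.10.1"})) // true
    }
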
	I1210 06:05:48.451984    4268 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:05:48.451984    4268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 06:05:48.451984    4268 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:05:48.455620    4268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 06:05:48.856277    4268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:05:48.856277    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:48.856277    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:48.856353    4268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:05:48.856353    4268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:05:48.856531    4268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:05:48.860333    4268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:05:48.875980    4268 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:05:48.881099    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:05:48.893740    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 06:05:48.914721    4268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:05:48.934821    4268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
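
The kubeadm.yaml.new just copied (2073 bytes) is the config printed above: one YAML stream holding four documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Any consumer has to decode them one document at a time; a minimal sketch using gopkg.in/yaml.v3's streaming decoder, with only an illustrative `kind` field decoded:

    package main

    import (
        "fmt"
        "io"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        stream := `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    `
        dec := yaml.NewDecoder(strings.NewReader(stream))
        for {
            var doc struct {
                Kind string `yaml:"kind"`
            }
            // Decode returns io.EOF once every document in the stream is read.
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println("found document:", doc.Kind)
        }
    }
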
	I1210 06:05:48.960316    4268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:05:48.972694    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:49.123118    4268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:05:49.255861    4268 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 06:05:49.255861    4268 certs.go:195] generating shared ca certs ...
	I1210 06:05:49.255861    4268 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:05:49.256902    4268 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 06:05:49.257201    4268 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 06:05:49.257329    4268 certs.go:257] generating profile certs ...
	I1210 06:05:49.257955    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 06:05:49.259233    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 06:05:49.259785    4268 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 06:05:49.259886    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 06:05:49.260142    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 06:05:49.260323    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 06:05:49.260584    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 06:05:49.260858    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 06:05:49.261989    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:05:49.291586    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:05:49.322755    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:05:49.365403    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:05:49.393221    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:05:49.422952    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:05:49.452108    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:05:49.481059    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:05:49.509597    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 06:05:49.540303    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 06:05:49.570456    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:05:49.600563    4268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:05:49.625982    4268 ssh_runner.go:195] Run: openssl version
	I1210 06:05:49.646811    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.665986    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 06:05:49.688481    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.697316    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.701997    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.756268    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:05:49.774475    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.792936    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 06:05:49.812585    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.820754    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.824743    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.871530    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:05:49.889957    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.909516    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:05:49.930952    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.939674    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.944280    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.991244    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
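
The ln/openssl/test sequence above installs each CA into the node's trust store: the PEM is copied under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and then a second symlink named <subject-hash>.0 (b5213941.0 for minikubeCA here) is verified, because OpenSSL looks certificates up by that hash. A sketch of computing the hash the same way the log does, by shelling out to openssl (the path below is a placeholder):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash returns the OpenSSL subject hash for a PEM certificate,
    // i.e. the basename used for the /etc/ssl/certs/<hash>.0 symlink.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem") // placeholder path
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        fmt.Printf("expect symlink /etc/ssl/certs/%s.0\n", h)
    }
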
	I1210 06:05:50.007593    4268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:05:50.020119    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:05:50.067344    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:05:50.116460    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:05:50.165520    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:05:50.215057    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:05:50.263721    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 06:05:50.308021    4268 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:50.311614    4268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.346733    4268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:05:50.360552    4268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:05:50.360580    4268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:05:50.364548    4268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:05:50.378578    4268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.383414    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:50.435757    4268 kubeconfig.go:125] found "functional-871500" server: "https://127.0.0.1:50086"
	I1210 06:05:50.443021    4268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:05:50.458083    4268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:49:09.404233938 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:05:48.941571180 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
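
The drift detection above is just `sudo diff -u old new`: diff exits 0 when the files match and 1 when they differ, so a non-zero exit plus the unified diff on stdout is what sends minikube down the reconfigure path. A minimal sketch of that exit-code check (the helper name is illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrift runs `diff -u old new` and reports whether the files differ,
    // returning the unified diff when they do. Exit status 1 means "different";
    // any other failure is a real error.
    func configDrift(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
        if err == nil {
            return false, "", nil // identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // out holds the captured diff
        }
        return false, "", err
    }

    func main() {
        drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drift {
            fmt.Println("detected kubeadm config drift:\n" + diff)
        }
    }
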
	I1210 06:05:50.458083    4268 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:05:50.462114    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.496795    4268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:05:50.522144    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:05:50.536445    4268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 05:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:53 /etc/kubernetes/scheduler.conf
	
	I1210 06:05:50.540786    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:05:50.560978    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:05:50.573948    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.578606    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:05:50.598347    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.624166    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.628272    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.646130    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:05:50.660886    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.664931    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:05:50.683408    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:05:50.706370    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:50.943551    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.490493    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.736715    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.807636    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
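
Rather than a full `kubeadm init`, the restart path replays individual init phases in order — certs, kubeconfig, kubelet-start, control-plane, etcd — each against the same /var/tmp/minikube/kubeadm.yaml. A simplified sketch of that loop (the real commands, as logged above, additionally prefix `env PATH=...` and run over SSH inside the node container):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Phases replayed by the restart path, in the order the log shows.
    var phases = []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}

    func main() {
        const kubeadm = "/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm" // path from the log
        for _, phase := range phases {
            cmd := fmt.Sprintf("sudo %s init phase %s --config /var/tmp/minikube/kubeadm.yaml", kubeadm, phase)
            // Stand-in for minikube's ssh_runner executing inside the node.
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
        fmt.Println("all init phases replayed")
    }
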
	I1210 06:05:51.910188    4268 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:05:51.914776    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:52.416327    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:52.915603    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:53.415591    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:53.915503    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:54.417765    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:54.915417    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:55.415417    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:55.915755    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:56.416253    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:56.915455    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:57.415861    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:57.915608    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:58.414964    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:58.916008    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:59.416023    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:59.916693    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:00.415637    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:00.915380    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:01.415701    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:01.915624    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:02.415007    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:02.915306    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:03.416586    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:03.916409    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:04.415626    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:04.916918    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:05.415662    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:05.915410    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:06.415782    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:06.915788    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:07.415237    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:07.915596    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:08.415151    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:08.915783    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:09.415452    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:09.915630    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:10.416137    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:10.915739    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:11.416340    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:11.916010    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:12.415711    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:12.915617    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:13.415590    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:13.916131    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:14.415833    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:14.915810    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:15.415434    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:15.916011    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:16.415715    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:16.916214    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:17.416569    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:17.915928    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:18.415760    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:18.915854    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:19.416023    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:19.915707    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:20.416022    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:20.915512    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:21.415449    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:21.915862    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:22.416187    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:22.915711    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:23.415407    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:23.916748    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:24.416067    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:24.915622    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:25.416460    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:25.916776    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:26.416986    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:26.915804    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:27.415924    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:27.915868    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:28.416289    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:28.915816    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:29.416455    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:29.916444    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:30.416956    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:30.917223    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:31.416570    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:31.916710    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:32.415252    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:32.916148    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:33.415760    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:33.915822    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:34.416279    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:34.915815    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:35.416215    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:35.916205    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:36.416507    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:36.915722    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:37.415763    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:37.915757    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:38.415942    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:38.915700    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:39.416506    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:39.915713    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:40.416558    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:40.916458    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:41.416738    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:41.916360    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:42.416858    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:42.916503    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:43.416468    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:43.915432    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:44.416286    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:44.915769    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:45.416376    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:45.916158    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:46.416260    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:46.916747    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:47.416302    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:47.915950    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:48.416456    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:48.916313    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:49.416114    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:49.916313    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:50.417029    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:50.916444    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:51.416929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
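
The long run of pgrep lines above is a fixed-interval wait loop: roughly every 500ms minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` until the process appears or the 60s budget announced at "waiting for apiserver process to appear" runs out. In this run the apiserver never appears, so the loop falls through to the log-gathering diagnostics that follow. A minimal sketch of such a loop (helper name illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until the apiserver process shows up or the
    // deadline passes, mirroring the ~500ms cadence visible in the timestamps.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when a matching process exists.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(60 * time.Second); err != nil {
            fmt.Println(err) // the failure mode seen in this test run
        }
    }
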
	I1210 06:06:51.915349    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:51.946488    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.946488    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:51.950223    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:51.978835    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.978835    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:51.982107    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:52.014720    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.014720    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:52.018659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:52.049849    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.049849    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:52.053813    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:52.081237    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.081237    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:52.085458    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:52.112058    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.112058    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:52.115659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:52.145147    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.145147    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:52.145147    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:52.145147    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:52.208920    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:52.208920    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:52.238472    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:52.238472    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:52.325434    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:52.325434    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:52.325434    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:52.371108    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:52.371108    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
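
The container-status command above is a shell fallback chain: `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a` prefers crictl wherever it is installed and falls back to a plain `docker ps -a` if crictl is missing or fails. Both the backquote substitution and `||` are shell features, so the whole pipeline has to be handed to bash rather than exec'd directly; a sketch (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Backquotes here are shell command substitution, evaluated by bash.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker listings failed:", err)
        }
        fmt.Print(string(out))
    }
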
	I1210 06:06:54.948530    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:54.972933    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:55.001036    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.001036    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:55.004290    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:55.032943    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.033029    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:55.036668    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:55.063474    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.063474    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:55.066822    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:55.095034    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.095034    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:55.098842    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:55.125575    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.125575    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:55.128696    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:55.158053    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.158053    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:55.161225    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:55.188975    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.188975    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:55.188975    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:55.188975    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:55.248739    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:55.248739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:55.280459    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:55.280994    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:55.367741    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:55.357007   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.358211   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.360797   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.361943   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.363117   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:55.357007   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.358211   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.360797   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.361943   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.363117   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:55.367741    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:55.367741    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:55.414124    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:55.414124    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:06:57.973920    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:57.999748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:58.030430    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.030430    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:58.034282    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:58.061116    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.061116    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:58.064723    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:58.091888    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.091888    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:58.095665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:58.123935    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.123935    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:58.127445    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:58.154330    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.154330    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:58.157668    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:58.184825    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.184842    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:58.188704    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:58.215563    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.215563    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:58.215563    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:58.215563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:58.279351    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:58.279351    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:58.309783    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:58.309783    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:58.393286    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:58.382107   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.383660   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.385217   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.386504   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.387262   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:58.382107   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.383660   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.385217   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.386504   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.387262   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:58.393286    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:58.393286    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:58.439058    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:58.439058    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:00.997523    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:01.021828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:01.053542    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.053618    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:01.056677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:01.085032    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.085032    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:01.088780    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:01.117302    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.117302    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:01.120752    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:01.148911    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.148911    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:01.152164    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:01.180119    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.180119    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:01.183696    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:01.213108    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.213108    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:01.216996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:01.243946    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.243946    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:01.243946    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:01.243946    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:01.326430    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:01.314277   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.315265   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.319210   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.320225   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.321052   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:01.314277   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.315265   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.319210   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.320225   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.321052   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
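The describe-nodes failure is consistent with the empty probes above: with no kube-apiserver container running, nothing listens on this profile's apiserver port 8441, so every kubectl dial to [::1]:8441 is refused. A quick standalone check of that condition, sketched in Go (the port comes from the log; the rest is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port 8441 is the apiserver port seen in the kubectl errors above.
	// A refused dial means the host answered but nothing owns the port.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}

A refused dial here, rather than a timeout, tells you the host is reachable but no process owns the port.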
	I1210 06:07:01.326430    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:01.326459    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:01.370668    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:01.370668    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:01.422598    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:01.422598    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:01.484373    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:01.484373    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
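That completes one diagnostic sweep; the log then repeats it on a roughly three-second cadence, each round starting with `sudo pgrep -xnf kube-apiserver.*minikube.*` to test whether an apiserver process has appeared. A minimal sketch of that kind of poll loop, assuming a hypothetical checkAPIServer helper and an arbitrary deadline:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer is a hypothetical stand-in for the health check in the log:
// pgrep exits non-zero when no matching kube-apiserver process exists.
func checkAPIServer() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the deadline is illustrative
	for time.Now().Before(deadline) {
		if checkAPIServer() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}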
	I1210 06:07:04.021695    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:04.044749    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:04.073749    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.073749    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:04.077613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:04.108271    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.108271    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:04.111712    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:04.140635    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.140635    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:04.143876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:04.172340    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.172340    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:04.176392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:04.202586    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.202586    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:04.207209    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:04.235404    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.235404    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:04.238669    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:04.269296    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.269296    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:04.269296    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:04.269296    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:04.333843    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:04.333843    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:04.363955    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:04.363955    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:04.444558    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:04.436237   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.437185   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.438566   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.439650   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.440909   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:04.436237   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.437185   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.438566   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.439650   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.440909   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:04.444558    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:04.445092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:04.491255    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:04.491387    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
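Note that the five "Gathering logs for ..." sources run in a different order in this sweep than in the previous one (kubelet and dmesg first here, describe nodes first above), which is what you would expect if the sources live in an unordered Go map. A sketch of that structure, with the commands copied from the log and everything else illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the log lines; the map itself is illustrative.
	// Iterating a Go map yields a different order on each run, matching
	// the shifting "Gathering logs for ..." order between sweeps.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  failed: %v\n", err)
			continue
		}
		fmt.Printf("  %d bytes\n", len(out))
	}
}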
	I1210 06:07:07.052134    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:07.075975    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:07.105912    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.105948    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:07.109453    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:07.138043    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.138043    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:07.141960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:07.168363    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.168363    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:07.172168    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:07.199814    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.199814    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:07.204084    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:07.233711    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.233711    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:07.236936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:07.264933    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.264933    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:07.268534    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:07.295981    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.295981    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:07.295981    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:07.295981    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:07.344067    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:07.344067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:07.405677    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:07.405677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:07.435735    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:07.435735    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:07.519926    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:07.510232   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.511256   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.513848   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.515885   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.517364   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:07.510232   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.511256   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.513848   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.515885   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.517364   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:07.519926    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:07.519926    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:10.070185    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:10.092250    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:10.122601    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.122601    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:10.128232    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:10.158544    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.158544    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:10.162689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:10.190392    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.190392    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:10.194663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:10.222107    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.222107    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:10.226125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:10.252783    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.252783    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:10.256304    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:10.283397    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.283397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:10.287203    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:10.315917    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.315961    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:10.315961    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:10.315997    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:10.379613    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:10.379613    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:10.413908    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:10.413937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:10.494940    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:10.485289   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.486129   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.488300   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.489233   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.492215   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:10.485289   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.486129   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.488300   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.489233   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.492215   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:10.494940    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:10.494940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:10.539292    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:10.539292    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:13.096499    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:13.120311    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:13.151343    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.151343    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:13.156101    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:13.187337    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.187337    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:13.190270    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:13.219411    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.219439    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:13.222798    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:13.249771    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.249771    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:13.253831    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:13.281375    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.281375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:13.285787    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:13.313732    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.313732    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:13.317446    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:13.345700    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.345700    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:13.345700    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:13.345745    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:13.390315    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:13.390315    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:13.448999    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:13.448999    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:13.479056    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:13.479056    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:13.560071    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:13.549957   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.551004   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.553955   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.555549   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.557226   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:13.549957   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.551004   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.553955   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.555549   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.557226   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:13.560113    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:13.560113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:16.115604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:16.139172    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:16.166471    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.166471    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:16.169908    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:16.197926    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.197926    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:16.201554    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:16.228895    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.228895    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:16.233644    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:16.261634    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.261634    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:16.265293    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:16.290403    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.290403    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:16.294262    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:16.322219    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.322219    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:16.326037    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:16.354206    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.354206    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:16.354206    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:16.354206    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:16.419895    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:16.419895    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:16.451758    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:16.451758    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:16.530533    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:16.520075   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.522508   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.523655   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.525647   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.527182   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:16.520075   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.522508   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.523655   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.525647   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.527182   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:16.530563    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:16.530563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:16.577832    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:16.577832    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:19.135824    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:19.161092    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:19.193445    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.193445    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:19.196612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:19.224210    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.224263    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:19.227196    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:19.255555    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.255555    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:19.259039    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:19.288567    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.288567    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:19.292040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:19.320589    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.320589    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:19.324658    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:19.351319    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.351319    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:19.355558    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:19.381847    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.381847    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:19.381847    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:19.381847    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:19.449609    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:19.449609    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:19.481141    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:19.481141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:19.571805    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:19.560658   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.564410   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.566480   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.567250   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.569393   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:19.560658   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.564410   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.566480   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.567250   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.569393   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:19.571876    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:19.571876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:19.618670    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:19.618670    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.172007    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:22.194631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:22.223852    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.223852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:22.227213    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:22.259065    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.259065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:22.262548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:22.294541    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.294541    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:22.297904    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:22.326231    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.326231    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:22.330450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:22.355798    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.355798    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:22.359259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:22.387519    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.387519    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:22.391049    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:22.418109    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.418109    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:22.418109    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:22.418109    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:22.499328    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:22.489790   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.490896   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.491903   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.494536   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.495501   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:22.489790   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.490896   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.491903   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.494536   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.495501   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:22.499328    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:22.499328    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:22.543726    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:22.543726    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.597115    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:22.597115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:22.659436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:22.659436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.192803    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:25.217242    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:25.244925    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.244925    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:25.251081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:25.278953    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.278953    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:25.282665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:25.309347    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.309347    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:25.313377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:25.341665    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.341665    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:25.345141    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:25.371901    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.371901    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:25.375742    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:25.403341    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.403365    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:25.406946    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:25.437008    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.437008    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:25.437008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:25.437008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:25.488060    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:25.488060    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:25.551490    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:25.551490    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.582172    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:25.582172    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:25.657523    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:25.657523    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:25.657523    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:28.209929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:28.232843    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:28.261372    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.261372    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:28.265040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:28.292477    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.292505    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:28.296009    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:28.320486    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.320486    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:28.324280    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:28.351296    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.351296    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:28.355074    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:28.390195    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.390195    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:28.394179    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:28.421613    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.421613    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:28.425545    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:28.453777    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.453777    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:28.453777    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:28.453777    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:28.499488    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:28.499488    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:28.561776    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:28.561776    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:28.593067    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:28.593112    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:28.668150    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:28.668150    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:28.668150    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.218151    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:31.240923    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:31.271844    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.271844    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:31.275477    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:31.301769    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.301769    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:31.305651    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:31.332406    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.332406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:31.336005    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:31.363591    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.363591    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:31.366859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:31.394594    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.394594    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:31.397901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:31.427778    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.427801    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:31.431499    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:31.458018    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.458018    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:31.458052    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:31.458052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.504698    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:31.504698    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:31.560046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:31.560046    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:31.620436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:31.620436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:31.648931    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:31.648931    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:31.727951    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:31.718357   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.719615   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.720837   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.722218   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.723669   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:34.232606    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:34.257055    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:34.288020    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.288020    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:34.291618    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:34.322496    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.322496    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:34.326328    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:34.354501    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.354501    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:34.358073    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:34.385199    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.385199    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:34.389140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:34.414316    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.414316    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:34.418016    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:34.445073    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.445073    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:34.448529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:34.479046    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.479046    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:34.479046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:34.479113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:34.540365    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:34.540365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:34.571107    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:34.571107    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:34.651369    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:34.639849   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.640797   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.643867   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.644948   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.645803   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:34.651369    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:34.651369    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:34.695236    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:34.695236    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:37.251178    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:37.274825    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:37.305218    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.305218    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:37.308994    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:37.338625    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.338625    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:37.342529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:37.370849    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.370849    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:37.374620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:37.403744    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.403744    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:37.407240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:37.435170    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.435170    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:37.439347    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:37.464351    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.464351    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:37.468757    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:37.497371    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.497371    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:37.497371    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:37.497371    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:37.559564    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:37.559564    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:37.588662    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:37.588662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:37.667884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:37.657246   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.658358   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.659261   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.661714   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.662832   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:37.667913    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:37.667913    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:37.713250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:37.713250    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.270184    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:40.293820    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:40.321872    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.321872    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:40.325799    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:40.355617    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.355617    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:40.361421    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:40.389168    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.389168    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:40.393374    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:40.425493    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.425493    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:40.429344    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:40.458342    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.458342    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:40.462356    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:40.488885    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.488885    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:40.492942    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:40.521222    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.521222    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:40.521222    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:40.521222    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:40.571132    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:40.571132    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.622991    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:40.622991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:40.680418    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:40.680418    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:40.710767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:40.710767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:40.786884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:40.777278   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.778087   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.780838   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.781817   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.782760   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:43.292302    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:43.316416    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:43.341307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.341307    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:43.345027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:43.370307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.370307    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:43.374217    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:43.402135    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.402135    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:43.405647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:43.433991    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.434045    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:43.437705    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:43.465221    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.465221    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:43.468945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:43.494153    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.494153    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:43.497409    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:43.526559    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.526559    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:43.526559    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:43.526559    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:43.592034    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:43.592034    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:43.621625    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:43.621625    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:43.699225    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:43.688896   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.689744   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.691973   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.692804   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.695050   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:43.699225    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:43.699225    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:43.742683    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:43.742683    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:46.296260    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:46.320038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:46.350083    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.350127    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:46.354017    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:46.392667    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.392667    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:46.396040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:46.423477    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.423477    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:46.427089    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:46.457044    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.457044    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:46.461309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:46.492133    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.492133    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:46.496367    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:46.523683    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.523683    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:46.528125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:46.556662    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.556662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:46.556662    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:46.556662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:46.622661    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:46.622661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:46.653087    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:46.653087    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:46.737036    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:46.725117   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.726037   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.729627   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.731599   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.733777   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:46.737036    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:46.737036    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:46.781873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:46.781873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.335832    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:49.359246    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:49.391481    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.391481    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:49.395372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:49.425639    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.425639    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:49.429616    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:49.457273    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.457273    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:49.460755    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:49.490445    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.490445    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:49.496643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:49.526292    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.526292    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:49.530371    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:49.557314    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.557359    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:49.561590    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:49.591753    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.591753    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:49.591753    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:49.591753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:49.621767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:49.621767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:49.707223    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:49.697858   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.698899   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.699785   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.703604   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.704517   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:49.707223    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:49.707223    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:49.751158    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:49.751158    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.799885    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:49.799885    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.366303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:52.390862    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:52.425737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.425770    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:52.429505    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:52.457550    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.457550    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:52.461709    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:52.488406    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.488406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:52.492766    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:52.518703    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.518703    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:52.522666    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:52.550619    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.550619    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:52.554570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:52.583512    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.583512    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:52.587153    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:52.614737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.614737    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:52.614737    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:52.614811    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.677940    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:52.677940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:52.709363    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:52.709363    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:52.791705    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:52.781560   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.782422   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.785208   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.786343   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.787080   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:52.791705    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:52.791705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:52.835266    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:52.835266    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.404989    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:55.433031    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:55.462583    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.462583    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:55.466139    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:55.492223    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.492223    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:55.495759    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:55.523357    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.523357    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:55.530265    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:55.561457    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.561457    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:55.565257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:55.594178    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.594178    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:55.599162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:55.627914    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.627914    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:55.632194    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:55.659551    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.659551    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:55.659551    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:55.659551    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:55.705228    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:55.705228    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.758018    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:55.758018    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:55.819730    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:55.819730    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:55.848800    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:55.848800    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:55.933602    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:55.919237   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.920249   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.924524   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.925340   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.926446   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:58.439191    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:58.463828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:58.497407    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.497407    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:58.500686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:58.530436    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.530436    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:58.533685    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:58.561959    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.561959    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:58.566417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:58.596302    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.596302    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:58.600866    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:58.629840    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.629840    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:58.633617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:58.660127    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.660127    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:58.663612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:58.692189    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.692189    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:58.692189    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:58.692189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:58.754556    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:58.754556    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:58.784251    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:58.784251    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:58.866899    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:58.854125   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.855115   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.856391   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.857985   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.859051   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:58.866899    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:58.866899    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:58.914793    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:58.914793    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:01.470823    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:01.494469    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:01.522381    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.522381    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:01.528647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:01.558012    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.558012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:01.564708    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:01.593835    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.593835    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:01.599056    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:01.623982    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.623982    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:01.627479    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:01.658260    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.658260    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:01.665836    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:01.697664    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.697664    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:01.702191    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:01.729816    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.729816    4268 logs.go:284] No container was found matching "kindnet"
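The probe sequence above is one docker CLI invocation repeated per expected control-plane component; an empty result is what produces the "0 containers" lines. A short Go sketch of that loop, assuming the docker CLI is on PATH (the helper is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>,
// mirroring the filter shown in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// len(ids) == 0 reproduces the `0 containers: []` lines above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}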
	I1210 06:08:01.729816    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:01.729816    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:01.788909    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:01.788909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:01.819503    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:01.819503    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:01.901569    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:01.889489   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.890512   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.891524   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.892377   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.894500   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:01.901569    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:01.901569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:01.947339    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:01.947339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
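Each "Gathering logs for ..." step shells out to a fixed command. The command strings below are copied verbatim from the log; the small wrapper that runs them locally (rather than through minikube's ssh_runner) is an assumption for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the log-collection commands seen above and prints
// its combined output.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}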
	I1210 06:08:04.502871    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:04.526200    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:04.558543    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.558543    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:04.563525    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:04.595332    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.595332    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:04.598770    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:04.630572    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.630572    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:04.635710    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:04.664369    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.664369    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:04.668951    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:04.699382    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.699382    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:04.702341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:04.732274    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.732274    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:04.735620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:04.763772    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.763772    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:04.763772    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:04.763866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:04.790890    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:04.790890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:04.872353    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:04.859391   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.860351   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.864058   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.865079   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.866076   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:04.872353    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:04.872353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:04.916959    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:04.916959    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.965485    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:04.965560    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.533039    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:07.559067    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:07.588219    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.588219    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:07.591689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:07.619350    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.619350    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:07.622996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:07.652464    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.652464    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:07.657960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:07.688918    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.688918    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:07.692848    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:07.722521    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.722521    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:07.726603    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:07.755963    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.755963    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:07.760630    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:07.790252    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.790252    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:07.790252    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:07.790327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.852838    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:07.852838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:07.883838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:07.883838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:07.961862    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
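The failing "describe nodes" step is an ordinary kubectl invocation against the node-local kubeconfig; with no apiserver listening it exits with status 1, which logs.go records as the warning above. A sketch of that step in Go, with the binary and kubeconfig paths copied from the log and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With nothing on localhost:8441 this is the
		// "Process exited with status 1" case from the log.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}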
	I1210 06:08:07.961862    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:07.961862    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:08.003991    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:08.003991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:10.563653    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:10.586319    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:10.613645    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.613645    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:10.617237    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:10.646795    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.646795    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:10.652694    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:10.683833    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.683833    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:10.688294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:10.718409    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.718409    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:10.722444    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:10.746660    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.746660    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:10.751527    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:10.781904    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.781904    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:10.787205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:10.814738    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.814738    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:10.814738    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:10.814792    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:10.841682    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:10.841682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:10.922604    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:10.922639    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:10.922661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:10.968300    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:10.968300    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:11.016711    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:11.016711    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:13.584862    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:13.607945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:13.639757    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.639757    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:13.643362    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:13.673001    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.673001    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:13.676417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:13.706241    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.706241    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:13.710040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:13.735617    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.735840    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:13.738750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:13.768821    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.768821    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:13.772175    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:13.801535    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.801535    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:13.805351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:13.832881    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.832881    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:13.832881    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:13.832881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:13.860208    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:13.860208    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:13.946278    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:13.946278    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:13.946278    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:13.991759    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:13.991759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:14.045144    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:14.045144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:16.612310    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:16.638180    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:16.667851    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.667851    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:16.671631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:16.700699    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.700699    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:16.706277    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:16.734906    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.734906    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:16.738957    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:16.766394    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.766394    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:16.772893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:16.802581    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.802581    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:16.808905    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:16.836566    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.836566    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:16.840142    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:16.868091    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.868091    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:16.868091    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:16.868091    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:16.897687    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:16.897687    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:16.975509    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:16.975509    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:16.975509    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:17.020453    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:17.020453    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:17.069748    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:17.069748    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:19.636799    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:19.659733    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:19.690968    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.690968    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:19.694619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:19.722863    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.722863    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:19.726187    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:19.752031    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.752031    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:19.755396    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:19.783376    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.783376    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:19.786987    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:19.814219    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.814219    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:19.817751    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:19.847004    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.847004    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:19.850402    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:19.881752    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.881752    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:19.881752    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:19.881752    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:19.930019    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:19.930019    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:19.983089    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:19.983089    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:20.045802    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:20.045802    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:20.077460    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:20.077460    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:20.162436    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:22.668475    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:22.691439    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:22.721661    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.721661    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:22.725309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:22.754031    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.754031    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:22.758027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:22.785864    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.785864    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:22.789619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:22.817384    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.817384    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:22.820727    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:22.851186    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.851186    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:22.855014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:22.883476    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.883476    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:22.887734    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:22.914588    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.914588    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:22.914588    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:22.914588    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:22.977189    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:22.977189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:23.007230    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:23.007230    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:23.085937    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:23.085937    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:23.085937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:23.128830    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:23.128830    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
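The timestamps show the whole check-and-gather cycle repeating roughly every three seconds, consistent with a poll-until-deadline loop around an apiserver health check. A generic sketch of such a loop (an illustration only, not minikube's actual wait logic):

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls addr until a TCP connection succeeds or the
// timeout elapses. The ~3s interval matches the cadence in the log.
func waitForAPIServer(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("apiserver did not become reachable: " + addr)
}

func main() {
	if err := waitForAPIServer("localhost:8441", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run the loop never succeeds: every iteration finds no kube-apiserver container and no listener on 8441, so each cycle falls through to the same log-gathering steps.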
	I1210 06:08:25.690109    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:25.713674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:25.742134    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.742164    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:25.745613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:25.771702    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.771789    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:25.775334    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:25.803239    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.803239    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:25.806686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:25.836716    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.836716    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:25.840387    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:25.867927    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.867927    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:25.871435    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:25.898205    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.898205    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:25.901920    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:25.931569    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.931569    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:25.931569    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:25.931569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:25.995604    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:25.995604    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:26.025733    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:26.025733    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:26.107058    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:26.094116   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.098292   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.099172   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.100188   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.101258   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:26.107115    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:26.107115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:26.150320    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:26.150320    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:28.710236    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:28.735443    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:28.764680    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.764680    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:28.768537    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:28.795455    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.795455    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:28.799570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:28.826729    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.826729    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:28.830406    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:28.859191    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.859191    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:28.862919    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:28.888542    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.888542    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:28.892494    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:28.919951    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.919951    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:28.923351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:28.952838    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.952838    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:28.952838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:28.952909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:29.034485    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:29.023348   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.024187   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.026875   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.028120   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.029114   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:29.034485    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:29.034485    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:29.079092    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:29.079092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:29.133555    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:29.133555    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:29.195221    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:29.195221    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:31.733591    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:31.757690    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:31.790674    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.790674    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:31.794674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:31.825657    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.825721    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:31.829403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:31.858023    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.858023    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:31.861500    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:31.890867    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.890914    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:31.894490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:31.922953    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.922953    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:31.927186    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:31.954090    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.954090    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:31.957750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:31.984886    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.984920    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:31.984920    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:31.984951    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:32.048671    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:32.048671    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:32.079259    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:32.079259    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:32.157323    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:32.157323    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:32.157323    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:32.203321    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:32.203321    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
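The cycle shown above is the unit that repeats for the remainder of this capture: probe for each control-plane container by name with docker ps, re-gather kubelet, dmesg, Docker, and container-status logs, retry kubectl describe nodes, and wait roughly three seconds. A minimal Go sketch of that probe loop follows; it is illustrative only (the commands, component names, and ~3s cadence are taken from this log, but minikube's real logs.go/ssh_runner.go code runs these commands over SSH inside the node, not locally):

package main

// Illustrative reconstruction of the diagnostic loop visible in this log.
// NOT minikube's implementation: the real code (logs.go / ssh_runner.go)
// executes these commands over SSH inside the node; this sketch runs the
// docker probes locally just to show the shape of the loop.

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out)) // empty slice => the "0 containers" lines above
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for attempt := 1; attempt <= 10; attempt++ { // this capture shows ~10 such cycles
		for _, c := range components {
			ids := containerIDs(c)
			fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		}
		// kubelet/dmesg/Docker log gathering and `kubectl describe nodes`
		// happen here in the real loop.
		time.Sleep(3 * time.Second) // matches the ~3s gap between cycle timestamps
	}
}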
	I1210 06:08:34.760108    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:34.782876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:34.810927    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.810927    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:34.814663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:34.839714    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.839714    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:34.843722    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:34.870089    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.870089    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:34.873513    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:34.905367    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.905367    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:34.909301    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:34.938914    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.938914    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:34.942767    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:34.972329    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.972329    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:34.976046    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:35.000780    4268 logs.go:282] 0 containers: []
	W1210 06:08:35.000780    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:35.000780    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:35.000838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:35.065353    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:35.065353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:35.095634    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:35.095634    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:35.171365    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:35.160656   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.162343   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.163491   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.165073   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.166057   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:35.160656   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.162343   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.163491   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.165073   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.166057   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:35.171365    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:35.171365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:35.215605    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:35.215605    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:37.774322    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:37.798677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:37.827936    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.827990    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:37.831228    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:37.860987    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.861065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:37.864478    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:37.891877    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.891877    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:37.895716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:37.920808    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.920808    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:37.924309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:37.952553    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.952553    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:37.956204    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:37.985826    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.985826    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:37.989201    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:38.017309    4268 logs.go:282] 0 containers: []
	W1210 06:08:38.017309    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:38.017309    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:38.017309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:38.082876    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:38.083876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:38.113796    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:38.113821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:38.196088    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:38.184048   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.187012   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.188966   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.190400   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.191695   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:38.184048   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.187012   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.188966   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.190400   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.191695   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:38.196123    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:38.196149    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:38.241227    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:38.241227    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:40.798944    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:40.821450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:40.850414    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.850414    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:40.853927    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:40.881239    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.881239    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:40.885281    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:40.912960    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.912960    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:40.918840    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:40.950469    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.950469    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:40.954401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:40.982375    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.982375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:40.986123    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:41.016542    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.016542    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:41.019622    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:41.049577    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.049662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:41.049662    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:41.049694    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:41.076753    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:41.076753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:41.160411    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:41.148000   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.148852   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.151925   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.154289   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.155876   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:41.148000   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.148852   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.151925   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.154289   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.155876   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:41.160445    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:41.160473    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:41.206612    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:41.206612    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:41.253715    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:41.253715    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:43.821604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:43.845650    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:43.874167    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.874207    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:43.877812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:43.905508    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.905508    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:43.909372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:43.939372    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.939426    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:43.942841    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:43.972078    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.972078    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:43.975697    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:44.002329    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.002329    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:44.005898    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:44.035821    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.035821    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:44.039602    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:44.066798    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.066839    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:44.066839    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:44.066839    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:44.128660    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:44.128660    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:44.159235    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:44.159235    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:44.242361    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:44.231367   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.232316   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.235308   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.236181   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.238800   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:44.231367   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.232316   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.235308   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.236181   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.238800   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:44.242361    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:44.242361    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:44.289326    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:44.289326    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:46.852233    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:46.874656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:46.903255    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.903255    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:46.907117    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:46.935108    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.935108    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:46.938584    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:46.967525    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.967525    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:46.973772    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:47.001558    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.001558    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:47.005083    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:47.034015    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.034015    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:47.039271    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:47.068459    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.068459    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:47.071981    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:47.102013    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.102013    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:47.102044    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:47.102065    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:47.164592    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:47.164592    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:47.195491    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:47.195491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:47.278044    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:47.265991   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.268610   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.269567   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.271904   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.272596   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:47.265991   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.268610   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.269567   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.271904   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.272596   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:47.278044    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:47.278044    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:47.324863    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:47.324863    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:49.880727    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:49.903789    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:49.935342    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.935342    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:49.938737    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:49.965312    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.965312    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:49.968607    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:49.996188    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.996188    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:50.001257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:50.027750    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.027750    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:50.031128    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:50.062729    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.062803    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:50.067118    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:50.095830    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.095830    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:50.099864    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:50.130283    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.130283    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:50.130283    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:50.130283    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:50.193360    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:50.193360    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:50.221703    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:50.221703    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:50.303176    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:50.293680   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.294854   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.296200   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.298483   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.299446   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:50.293680   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.294854   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.296200   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.298483   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.299446   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:50.303176    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:50.303176    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:50.370163    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:50.370163    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:52.928303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:52.953491    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:52.981271    4268 logs.go:282] 0 containers: []
	W1210 06:08:52.981271    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:52.985316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:53.013881    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.013881    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:53.017036    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:53.045261    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.045261    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:53.049312    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:53.077577    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.077577    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:53.080557    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:53.110750    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.110750    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:53.114132    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:53.141372    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.141372    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:53.145576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:53.175705    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.175705    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:53.175705    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:53.175705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:53.237519    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:53.237519    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:53.267260    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:53.267260    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:53.363780    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:53.355380   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.356544   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.357888   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.359124   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.360377   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:53.355380   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.356544   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.357888   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.359124   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.360377   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:53.363780    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:53.363780    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:53.409834    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:53.409834    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:55.976440    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:56.001300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:56.033852    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.033852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:56.037643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:56.065934    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.065934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:56.072377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:56.102560    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.102560    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:56.106392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:56.143025    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.143025    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:56.149239    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:56.176909    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.176909    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:56.180641    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:56.208166    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.208227    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:56.211221    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:56.240358    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.240358    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:56.240358    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:56.240358    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:56.303618    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:56.303618    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:56.333844    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:56.333844    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:56.416014    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:56.416014    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:56.416014    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:56.461496    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:56.461496    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.013428    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:59.038379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:59.067727    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.067758    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:59.071379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:59.104272    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.104272    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:59.107653    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:59.133866    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.133866    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:59.137442    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:59.164317    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.164317    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:59.168171    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:59.198264    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.198291    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:59.202014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:59.229252    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.229252    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:59.233058    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:59.262804    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.262837    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:59.262837    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:59.262866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:59.309986    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:59.309986    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.362017    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:59.362052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:59.422749    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:59.422749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:59.453982    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:59.453982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:59.534843    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:59.524756   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.525914   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.526844   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.529305   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.530549   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
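Every "describe nodes" probe in this stretch of the log fails identically: kubectl cannot reach the apiserver at localhost:8441 ("connection refused"), consistent with the empty docker ps results for k8s_kube-apiserver below. A minimal sketch of the reachability check this implies, assuming the same hardcoded endpoint (illustrative, not minikube source):

    // probe_apiserver.go -- a minimal sketch (not minikube source): check whether
    // anything is listening on the apiserver port before shelling out to kubectl.
    // The endpoint localhost:8441 is taken from the log; the timeout is an assumption.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // matches "connect: connection refused" above
            return
        }
        conn.Close()
        fmt.Println("apiserver port open; kubectl describe nodes should be able to connect")
    }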
	I1210 06:09:02.039970    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:02.063736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:02.094049    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.094049    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:02.097680    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:02.124934    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.124934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:02.130724    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:02.158566    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.158566    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:02.162548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:02.188736    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.188736    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:02.192205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:02.222271    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.222271    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:02.225729    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:02.256473    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.256473    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:02.260671    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:02.287011    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.287011    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:02.287011    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:02.287011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:02.392011    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:02.382734   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.383733   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.385038   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.386241   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.387283   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:02.392011    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:02.392011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:02.440008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:02.440008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:02.494764    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:02.494764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:02.553322    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:02.553322    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
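The cycle above walks the expected control-plane containers one "docker ps -a --filter=name=k8s_<component>" at a time; with the apiserver never having started, every filter yields "0 containers: []". A sketch of the same enumeration, reusing the exact docker CLI flags and component names from the log (the Go wrapper itself is illustrative):

    // list_k8s_containers.go -- illustrative wrapper around the enumeration above;
    // component names and docker flags are copied from the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c) // mirrors logs.go:284
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }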
	I1210 06:09:05.090291    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:05.112936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:05.141630    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.141630    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:05.144882    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:05.180128    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.180128    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:05.184542    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:05.213219    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.213219    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:05.216935    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:05.244351    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.244351    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:05.248038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:05.277710    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.277760    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:05.281504    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:05.310297    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.310297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:05.314071    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:05.352094    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.352094    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:05.352094    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:05.352094    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:05.398783    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:05.398896    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:05.458685    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:05.458685    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:05.489319    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:05.489319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:05.565657    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:05.565657    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:05.565657    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
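Each cycle gathers the same four diagnostic sources: the kubelet and docker/cri-docker units via journalctl, recent kernel messages via dmesg (-P no pager, -H human-readable, -L=never no color, filtered to warn and above), and container status via crictl with a plain "docker ps -a" fallback when crictl is absent. A sketch that runs the same collection commands locally; minikube executes them over SSH via ssh_runner.go, so the loop below is illustrative only:

    // gather_logs.go -- illustrative: replays the "Gathering logs for ..." sequence
    // (logs.go:123) locally. The shell commands are verbatim from the log; running
    // them requires sudo and a systemd host, which is an assumption here.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        gathers := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gathers {
            fmt.Printf("Gathering logs for %s ...\n", g[0])
            out, err := exec.Command("/bin/bash", "-c", g[1]).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", g[0], err)
            }
            fmt.Printf("%s", out)
        }
    }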
	I1210 06:09:08.115745    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:08.138736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:08.171066    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.171066    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:08.174894    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:08.201941    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.201941    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:08.205547    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:08.233859    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.233859    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:08.237566    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:08.264996    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.264996    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:08.269259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:08.294641    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.294641    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:08.298901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:08.350200    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.350200    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:08.356240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:08.383315    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.383315    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:08.383354    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:08.383372    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:08.448982    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:08.448982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:08.479093    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:08.479093    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:08.560338    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:08.560338    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:08.560338    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.606173    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:08.606173    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
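The retry cadence is visible in the timestamps: roughly every three seconds the runner re-checks for a kube-apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*" (-x exact match, -n newest, -f match against the full command line) and, finding none, re-gathers the diagnostics above. A hedged sketch of such a wait loop; the three-second sleep matches the log, but the ten-attempt bound is an assumption:

    // wait_apiserver.go -- illustrative wait loop, not minikube source. pgrep exits
    // non-zero when no process matches, which is what drives the retry.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for attempt := 1; attempt <= 10; attempt++ { // bound is an assumption
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            fmt.Printf("attempt %d: kube-apiserver not running; retrying in 3s\n", attempt)
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }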
	I1210 06:09:11.159744    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:11.183765    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:11.210674    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.210698    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:11.214341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:11.240117    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.240117    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:11.243522    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:11.272551    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.272551    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:11.276401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:11.305619    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.305619    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:11.309310    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:11.360405    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.360447    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:11.363925    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:11.393251    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.393251    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:11.397006    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:11.426962    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.426962    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:11.426962    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:11.426962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:11.477327    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:11.477327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:11.532161    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:11.532161    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:11.592212    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:11.592212    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:11.622686    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:11.622686    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:11.705726    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:14.210675    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:14.234399    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:14.264863    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.264863    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:14.268775    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:14.300413    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.300413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:14.304487    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:14.346847    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.346847    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:14.350643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:14.380435    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.380435    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:14.384376    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:14.412797    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.412797    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:14.416519    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:14.447397    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.447397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:14.450969    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:14.478632    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.478695    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:14.478695    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:14.478695    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:14.528915    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:14.528915    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:14.588962    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:14.588962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:14.618677    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:14.618677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:14.700289    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:14.700289    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:14.700289    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.249092    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:17.272763    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:17.300862    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.300952    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:17.306099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:17.346725    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.346725    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:17.350199    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:17.377982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.377982    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:17.380998    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:17.409995    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.409995    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:17.414294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:17.442988    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.442988    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:17.449120    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:17.475982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.475982    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:17.479552    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:17.506308    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.506308    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:17.506308    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:17.506308    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.553141    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:17.553141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:17.607169    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:17.607169    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:17.668742    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:17.668742    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:17.697789    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:17.697789    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:17.779510    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
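Note that the probe pins a versioned kubectl (/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl) with an explicit --kubeconfig rather than relying on the host PATH, and logs.go:130 reports stdout and stderr separately on failure. A sketch reproducing that capture; with nothing listening on 8441 it exits with status 1 exactly as logged (illustrative wrapper, paths verbatim from the log):

    // describe_nodes.go -- illustrative: run the pinned kubectl from the log and
    // report stdout/stderr separately, as the "failed describe nodes" entries do.
    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
            return
        }
        fmt.Print(stdout.String())
    }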
	I1210 06:09:20.283521    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:20.307295    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:20.338053    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.338053    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:20.341656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:20.372543    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.372543    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:20.376481    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:20.403212    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.403212    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:20.406617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:20.433422    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.433422    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:20.437081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:20.465523    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.465523    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:20.469716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:20.497769    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.497769    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:20.501184    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:20.528203    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.528203    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:20.528203    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:20.528203    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:20.604309    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:20.604309    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:20.604309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:20.649121    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:20.649121    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:20.700336    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:20.700336    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:20.761156    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:20.761156    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:23.296453    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:23.318440    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:23.351977    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.351977    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:23.355449    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:23.384390    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.384413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:23.387748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:23.416613    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.416613    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:23.422740    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:23.447410    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.447410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:23.450859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:23.481298    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.481298    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:23.484812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:23.510855    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.510855    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:23.514267    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:23.543042    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.543042    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:23.543042    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:23.543042    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:23.608264    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:23.608264    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:23.639456    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:23.639491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:23.717275    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:23.717275    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:23.717319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:23.761563    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:23.761563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:26.321131    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:26.344893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:26.376780    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.376780    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:26.380359    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:26.408268    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.408268    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:26.411660    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:26.440862    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.440862    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:26.444048    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:26.473546    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.473546    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:26.476599    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:26.505151    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.505151    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:26.508748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:26.538121    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.538121    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:26.542550    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:26.569122    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.569122    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:26.569122    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:26.569122    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:26.629615    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:26.629615    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:26.660648    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:26.660648    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:26.741888    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:26.741888    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:26.741888    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:26.787954    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:26.787954    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:29.348252    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:29.372474    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:29.401265    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.401265    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:29.404730    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:29.435756    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.435805    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:29.439300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:29.470279    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.470279    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:29.474091    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:29.502410    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.502410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:29.505917    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:29.535595    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.535595    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:29.539532    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:29.568556    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.568556    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:29.572020    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:29.599739    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.599739    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:29.599739    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:29.599739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:29.661483    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:29.661483    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:29.691565    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:29.691565    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:29.774718    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
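	The repeated connection-refused errors above mean nothing is answering on the apiserver port (8441) inside the node, which is why every "describe nodes" attempt fails the same way. A minimal manual check, pairing the process probe minikube itself runs with a direct port probe (the curl line is an illustrative addition, not taken from this run):
	
	    # inside the minikube node, e.g. via `minikube ssh`
	    sudo pgrep -xnf kube-apiserver.*minikube.*      # the same liveness probe seen between log-gathering cycles
	    curl -ksS https://localhost:8441/healthz        # hypothetical direct probe of the apiserver port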
	I1210 06:09:29.774718    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:29.774718    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:29.816878    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:29.816878    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
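	The container-status probe on the line above packs a fallback into one shell one-liner: try crictl first, and fall back to the docker CLI if that invocation fails for any reason (missing binary or error). A long-form sketch with the same failure semantics; the 2>/dev/null only hides the "command not found" noise:
	
	    # long form of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    if ! sudo crictl ps -a 2>/dev/null; then
	        sudo docker ps -a
	    fi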
	I1210 06:09:32.374472    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:32.397027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:32.429904    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.429904    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:32.433647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:32.460698    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.460756    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:32.464368    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:32.491682    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.491682    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:32.495066    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:32.523531    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.523531    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:32.526773    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:32.557102    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.557102    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:32.563482    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:32.591959    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.591959    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:32.595725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:32.625486    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.625486    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:32.625486    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:32.625486    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:32.688451    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:32.688451    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:32.719004    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:32.719004    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:32.800020    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:32.800020    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:32.800020    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:32.849061    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:32.849061    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:35.404633    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:35.429425    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:35.458232    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.458277    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:35.462316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:35.489097    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.489097    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:35.492725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:35.522979    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.522979    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:35.526587    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:35.555948    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.555948    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:35.559915    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:35.589220    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.589220    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:35.592883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:35.619789    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.619850    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:35.622872    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:35.649510    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.649534    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:35.649534    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:35.649534    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:35.714882    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:35.715881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:35.745666    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:35.745666    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:35.825749    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:35.812454   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.813402   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.819556   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.820578   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.821180   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:35.825749    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:35.825749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:35.871102    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:35.871102    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.430887    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:38.453030    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:38.484706    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.484706    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:38.488140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:38.517210    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.517210    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:38.521162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:38.549348    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.549348    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:38.553103    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:38.580109    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.580109    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:38.583794    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:38.613855    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.613934    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:38.618771    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:38.647097    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.647097    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:38.650932    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:38.680610    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.680610    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:38.680610    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:38.680682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:38.758813    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:38.749300   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.750109   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753125   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753957   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.756268   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:38.758813    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:38.758813    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:38.807873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:38.807873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.867039    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:38.867067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:38.926759    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:38.926759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:41.462739    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:41.490464    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:41.518622    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.518622    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:41.524470    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:41.551685    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.551685    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:41.556977    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:41.584962    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.584962    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:41.588808    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:41.620594    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.620594    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:41.624185    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:41.656800    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.656800    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:41.659821    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:41.692628    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.692628    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:41.696287    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:41.726090    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.726090    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:41.726090    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:41.726090    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:41.803427    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:41.793678   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.794849   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.796092   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.797004   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.799523   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:41.803427    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:41.803427    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:41.849170    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:41.849170    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:41.903654    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:41.903654    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:41.962299    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:41.962299    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.500876    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:44.523403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:44.554849    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.554849    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:44.558352    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:44.588012    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.588012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:44.591883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:44.617831    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.617831    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:44.621490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:44.648689    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.648689    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:44.652490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:44.684042    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.684042    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:44.687539    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:44.716817    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.716856    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:44.720738    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:44.747250    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.747250    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:44.747250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:44.747318    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:44.798396    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:44.798396    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:44.858678    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:44.858678    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.888995    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:44.888995    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:44.964778    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:44.955796   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.956638   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.958906   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.960018   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.961253   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:44.964778    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:44.964778    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.517925    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:47.541890    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:47.573716    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.573716    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:47.577684    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:47.606333    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.606333    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:47.610098    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:47.635733    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.635733    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:47.639327    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:47.669406    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.669406    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:47.673219    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:47.700633    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.700633    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:47.705121    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:47.733323    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.733323    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:47.737104    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:47.763071    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.763071    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:47.763071    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:47.763140    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:47.826821    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:47.826821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:47.856590    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:47.856590    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:47.933339    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:47.922383   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.923323   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.927777   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.928818   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.930519   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:47.933339    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:47.933339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.979012    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:47.979012    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.532699    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:50.557240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:50.585813    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.585813    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:50.589369    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:50.622124    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.622124    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:50.625576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:50.650920    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.650920    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:50.653943    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:50.682545    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.682545    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:50.686340    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:50.715893    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.715893    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:50.719099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:50.748297    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.748297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:50.751451    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:50.779846    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.779866    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:50.779890    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:50.779890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.830198    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:50.830198    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:50.891330    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:50.891330    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:50.921331    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:50.921331    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:51.001029    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:50.991827   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.992701   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.996634   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.997913   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.999128   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:09:51.001029    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:51.001029    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:53.554507    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:53.573659    4268 kubeadm.go:602] duration metric: took 4m3.2099315s to restartPrimaryControlPlane
	W1210 06:09:53.573659    4268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:09:53.578070    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
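	After roughly 4m3s of failed polling, minikube gives up on restarting the existing control plane and wipes it before re-initializing. The reset it runs is shown on the line above; reformatted here across two lines for readability, the PATH override makes kubeadm resolve the version-pinned binaries under /var/lib/minikube, and --cri-socket targets cri-dockerd:
	
	    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" \
	        kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force'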
	I1210 06:09:54.057699    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:09:54.081355    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:09:54.095306    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:09:54.099578    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:09:54.113717    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:09:54.113717    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:09:54.118539    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:09:54.131350    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:09:54.135225    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:09:54.152710    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:09:54.164770    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:09:54.168898    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:09:54.185476    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.198490    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:09:54.202839    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.221180    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:09:54.234980    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:09:54.239197    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
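	The four grep-and-remove steps above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. Condensed into a loop (a sketch; the individual grep and rm commands are exactly the ones in the log, and rm -f is a no-op here since none of the files exist):
	
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/$f.conf \
	            || sudo rm -f /etc/kubernetes/$f.conf
	    done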
	I1210 06:09:54.256185    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:09:54.367900    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:09:54.450675    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:09:54.549884    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:13:55.304144    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:13:55.304213    4268 kubeadm.go:319] 
	I1210 06:13:55.304353    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
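	The failure above is kubeadm's kubelet health gate: it polls the kubelet's local healthz endpoint for up to 4 minutes and then aborts wait-control-plane. The same probe, plus the two triage commands kubeadm itself suggests further down in this output, can be run inside the node:
	
	    curl -sSL http://127.0.0.1:10248/healthz     # the exact health check kubeadm waits on
	    systemctl status kubelet                     # is the unit running at all?
	    journalctl -xeu kubelet                      # why it is failing to start or crash-looping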
	I1210 06:13:55.308106    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:13:55.308252    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:13:55.308389    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:13:55.308682    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:13:55.309221    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:13:55.309881    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:13:55.310536    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:13:55.310642    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] OS: Linux
	I1210 06:13:55.310721    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:13:55.311254    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:13:55.311367    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:13:55.311538    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:13:55.311670    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:13:55.311750    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:13:55.311824    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:13:55.312446    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:13:55.316886    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:13:55.321599    4268 out.go:252]   - Booting up control plane ...
	I1210 06:13:55.322123    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:13:55.323161    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000948554s
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 
	W1210 06:13:55.324159    4268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000948554s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
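The 4m0s timeout above is kubeadm's own poll of the kubelet's local healthz endpoint. A minimal way to reproduce that probe by hand (a sketch, assuming the profile name this run uses later):

    # Probe the same URL kubeadm polls for up to 4m0s before giving up.
    out/minikube-windows-amd64.exe ssh -p functional-871500 -- curl -sS http://127.0.0.1:10248/healthz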
	
	I1210 06:13:55.329361    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:13:55.788774    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:13:55.807235    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:13:55.812328    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:13:55.824166    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:13:55.824166    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:13:55.829624    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:13:55.842900    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:13:55.846743    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:13:55.863007    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:13:55.876646    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:13:55.881322    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:13:55.900836    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.916668    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:13:55.921481    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.939813    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:13:55.954759    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:13:55.960058    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
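The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: any kubeconfig that does not reference the expected control-plane endpoint is removed so the next kubeadm run can regenerate it. A condensed sketch of the same logic (illustrative shell, not the Go code that produced these lines):

    # Drop kubeconfigs that do not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done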
	I1210 06:13:55.976998    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:13:56.092783    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:13:56.183907    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:13:56.283504    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:17:56.874768    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:17:56.874768    4268 kubeadm.go:319] 
	I1210 06:17:56.875332    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:17:56.883860    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:17:56.883860    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:17:56.884428    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:17:56.884973    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:17:56.885550    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:17:56.886100    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] OS: Linux
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:17:56.886670    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:17:56.887297    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:17:56.890313    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:17:56.890917    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:17:56.891009    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:17:56.892230    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:17:56.892299    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:17:56.896667    4268 out.go:252]   - Booting up control plane ...
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:17:56.897780    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:17:56.897839    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077699s
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.897839    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:403] duration metric: took 12m6.5812244s to StartCluster
	I1210 06:17:56.898801    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:17:56.902808    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:17:57.138118    4268 cri.go:89] found id: ""
	I1210 06:17:57.138148    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.138172    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:17:57.138172    4268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:17:57.142698    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:17:57.185021    4268 cri.go:89] found id: ""
	I1210 06:17:57.185021    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.185021    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:17:57.185092    4268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:17:57.189241    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:17:57.228303    4268 cri.go:89] found id: ""
	I1210 06:17:57.228350    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.228350    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:17:57.228350    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:17:57.233381    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:17:57.304677    4268 cri.go:89] found id: ""
	I1210 06:17:57.304677    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.304677    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:17:57.304677    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:17:57.309206    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:17:57.355436    4268 cri.go:89] found id: ""
	I1210 06:17:57.355436    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.355436    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:17:57.355436    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:17:57.359252    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:17:57.404878    4268 cri.go:89] found id: ""
	I1210 06:17:57.404878    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.404878    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:17:57.404878    4268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:17:57.409876    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:17:57.451416    4268 cri.go:89] found id: ""
	I1210 06:17:57.451416    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.451499    4268 logs.go:284] No container was found matching "kindnet"
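The block above is minikube's post-failure container census: one crictl query per expected control-plane component, each returning an empty ID list because nothing ever started. Roughly equivalent shell (a sketch, not minikube's source):

    # List containers, running or exited, for each expected component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container found matching \"$name\""
    done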
	I1210 06:17:57.451499    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:17:57.451499    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:17:57.506664    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:17:57.506764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:17:57.578699    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:17:57.578699    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:17:57.610293    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:17:57.610293    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:17:57.852641    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
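The connection refused on localhost:8441 follows directly from the census above: 8441 is this profile's apiserver port, and no kube-apiserver container exists to listen on it. A direct check from the host (assumes ss from iproute2 is present in the node image):

    # Expect no listener; the apiserver never started.
    out/minikube-windows-amd64.exe ssh -p functional-871500 -- sudo ss -ltn 'sport = :8441'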
	I1210 06:17:57.852641    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:17:57.852641    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 06:17:57.899832    4268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:17:57.899832    4268 out.go:285] * 
	W1210 06:17:57.899832    4268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:17:57.900356    4268 out.go:285] * 
	W1210 06:17:57.902683    4268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:17:57.916933    4268 out.go:203] 
	W1210 06:17:57.920352    4268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:17:57.920907    4268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:17:57.921055    4268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:17:57.924778    4268 out.go:203] 
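The SystemVerification warning repeated through this log names the escape hatch: on a cgroup v1 host, kubelet v1.35 requires the KubeletConfiguration field FailCgroupV1 set to false. kubeadm already applies a strategic-merge patch to the "kubeletconfiguration" target here (the [patches] lines above), and the same mechanism can carry the override. A hedged sketch using kubeadm's --patches directory convention, independent of minikube's generated config:

    # kubeadm (>= 1.25) applies files named <target>+strategic.yaml from a
    # patches directory; this fragment opts the kubelet back into cgroup v1.
    mkdir -p /tmp/kubeadm-patches
    echo 'failCgroupV1: false' > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubeadm-patches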
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939273296Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
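One detail worth flagging in the Docker section above: cri-dockerd logs "Setting cgroupDriver cgroupfs". The suggestion printed earlier (--extra-config=kubelet.cgroup-driver=systemd) changes only the kubelet half of that pairing, so a retry along those lines is plausible but unverified against this build:

    out/minikube-windows-amd64.exe start -p functional-871500 --extra-config=kubelet.cgroup-driver=systemd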
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:18:54.282378   41258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:18:54.283341   41258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:18:54.284304   41258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:18:54.285861   41258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:18:54.287096   41258 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:18:54 up  1:47,  0 user,  load average: 0.10, 0.25, 0.42
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:18:50 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:18:51 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 392.
	Dec 10 06:18:51 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:51 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:51 functional-871500 kubelet[41100]: E1210 06:18:51.518832   41100 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:18:51 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:18:51 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:18:52 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 393.
	Dec 10 06:18:52 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:52 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:52 functional-871500 kubelet[41112]: E1210 06:18:52.297252   41112 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:18:52 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:18:52 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:18:52 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 394.
	Dec 10 06:18:52 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:52 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:53 functional-871500 kubelet[41140]: E1210 06:18:53.021814   41140 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:18:53 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:18:53 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:18:53 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 395.
	Dec 10 06:18:53 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:53 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:18:53 functional-871500 kubelet[41201]: E1210 06:18:53.770209   41201 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:18:53 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:18:53 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
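The kubelet section closing the log shows the root cause in the clear: systemd restarted kubelet 395 times, and every attempt exited with "kubelet is configured to not run on a host using cgroup v1". The 5.15.153.1-microsoft-standard-WSL2 kernel here mounts the legacy v1 hierarchy; the standard check for which version a node mounts (util-linux stat, assumed present in the node image):

    # cgroup2fs => cgroup v2; tmpfs => legacy cgroup v1 layout.
    out/minikube-windows-amd64.exe ssh -p functional-871500 -- stat -fc %T /sys/fs/cgroup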
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (619.4979ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (54.43s)
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (20.21s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-871500 apply -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-871500 apply -f testdata\invalidsvc.yaml: exit status 1 (20.203928s)
** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:50086/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false
** /stderr **
functional_test.go:2328: kubectl --context functional-871500 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (20.21s)
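This failure is downstream of the same dead apiserver: client-side validation fetches the cluster's OpenAPI document, and the EOF on https://127.0.0.1:50086/openapi/v2 is that fetch dying. The workaround named in the stderr would only trade this error for a connection error on the actual apply:

    kubectl --context functional-871500 apply -f testdata\invalidsvc.yaml --validate=false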
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (5.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 status: exit status 2 (584.8688ms)

-- stdout --
	functional-871500
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-871500 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (600.3669ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-871500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 status -o json: exit status 2 (586.8383ms)

-- stdout --
	{"Name":"functional-871500","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-871500 status -o json" : exit status 2
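A note on the custom format used above: -f takes a Go template rendered against minikube's status structure, so only the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} field references are interpreted; surrounding labels, including the test's literal misspelling "kublet:", are copied through verbatim, which is why the typo reappears in the output. A correctly labelled variant of the same invocation would be:

	out/minikube-windows-amd64.exe -p functional-871500 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

The exit status is independent of the template: minikube status deliberately exits non-zero when components are not running, so exit status 2 is expected while the kubelet and apiserver are stopped.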
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
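One useful datum in the inspect output: under NetworkSettings.Ports, the apiserver's container port 8441/tcp is published at 127.0.0.1:50086, the same endpoint the earlier kubectl EOF pointed at, so the kubeconfig wiring is intact and the failure lies with the apiserver itself. The same mapping can be read directly, without the full inspect dump, with:

	docker port functional-871500 8441

which should print the published host address (here, 127.0.0.1:50086).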
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (598.0444ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.2451864s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-871500 cache delete minikube-local-cache-test:functional-871500                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl images                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest                                       │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ cache   │ functional-871500 cache reload                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl │ functional-871500 kubectl -- --context functional-871500 get pods                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ start   │ -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:05 UTC │                     │
	│ addons  │ functional-871500 addons list                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config unset cpus                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config get cpus                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ addons  │ functional-871500 addons list -o json                                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config set cpus 2                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config get cpus                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config unset cpus                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config get cpus                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service list                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service list -o json                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service --namespace=default --https --url hello-node                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service hello-node --url --format={{.IP}}                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service hello-node --url                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:05:40
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:05:40.939558    4268 out.go:360] Setting OutFile to fd 1136 ...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.981558    4268 out.go:374] Setting ErrFile to fd 1864...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.994563    4268 out.go:368] Setting JSON to false
	I1210 06:05:40.997553    4268 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5672,"bootTime":1765341068,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:05:40.997553    4268 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:05:41.001553    4268 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:05:41.004553    4268 notify.go:221] Checking for updates...
	I1210 06:05:41.007553    4268 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:05:41.009554    4268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:05:41.013554    4268 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:05:41.018172    4268 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:05:41.020466    4268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:05:41.023475    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:41.023475    4268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:05:41.199301    4268 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:05:41.203110    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.444620    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.42593568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.449620    4268 out.go:179] * Using the docker driver based on existing profile
	I1210 06:05:41.451493    4268 start.go:309] selected driver: docker
	I1210 06:05:41.451493    4268 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.451493    4268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:05:41.457890    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.686631    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.6698388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.735496    4268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:05:41.735496    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:41.735496    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:41.735496    4268 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.741018    4268 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 06:05:41.744259    4268 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 06:05:41.749232    4268 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:05:41.752040    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:41.752173    4268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:05:41.752173    4268 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 06:05:41.752173    4268 cache.go:65] Caching tarball of preloaded images
	I1210 06:05:41.752485    4268 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 06:05:41.752621    4268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 06:05:41.752768    4268 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 06:05:41.832812    4268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:05:41.832812    4268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:05:41.832812    4268 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:05:41.832812    4268 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:05:41.832812    4268 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-871500"
	I1210 06:05:41.832812    4268 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:05:41.832812    4268 fix.go:54] fixHost starting: 
	I1210 06:05:41.839306    4268 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 06:05:41.895279    4268 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 06:05:41.895279    4268 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:05:41.898650    4268 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 06:05:41.898650    4268 machine.go:94] provisionDockerMachine start ...
	I1210 06:05:41.901828    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:41.956991    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:41.957565    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:41.957565    4268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:05:42.140179    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.140179    4268 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 06:05:42.144876    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.200094    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.200718    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.200718    4268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 06:05:42.397029    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.400561    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.454568    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.455568    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.455568    4268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:05:42.650836    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:05:42.650836    4268 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 06:05:42.650836    4268 ubuntu.go:190] setting up certificates
	I1210 06:05:42.650836    4268 provision.go:84] configureAuth start
	I1210 06:05:42.655100    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:42.713113    4268 provision.go:143] copyHostCerts
	I1210 06:05:42.713113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 06:05:42.713113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 06:05:42.713113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 06:05:42.714114    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 06:05:42.714114    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 06:05:42.714114    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 06:05:42.715113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 06:05:42.715113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 06:05:42.715113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 06:05:42.716114    4268 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
	I1210 06:05:42.798580    4268 provision.go:177] copyRemoteCerts
	I1210 06:05:42.802588    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:05:42.805578    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.862278    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:42.996859    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:05:43.030822    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:05:43.062798    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:05:43.094379    4268 provision.go:87] duration metric: took 443.5373ms to configureAuth
	I1210 06:05:43.094426    4268 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:05:43.094529    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:43.098320    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.157455    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.158049    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.158049    4268 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 06:05:43.340189    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 06:05:43.340189    4268 ubuntu.go:71] root file system type: overlay
	I1210 06:05:43.340189    4268 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 06:05:43.343620    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.397863    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.398871    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.398902    4268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 06:05:43.595156    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 06:05:43.598799    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.653593    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.654604    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.654630    4268 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 06:05:43.838408    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:05:43.838408    4268 machine.go:97] duration metric: took 1.939733s to provisionDockerMachine
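The SSH command above is minikube's idempotent-update idiom for the Docker unit: diff -u exits non-zero only when the freshly rendered unit differs from the installed one, so the || branch (move into place, daemon-reload, enable, restart) runs only on change and an already up-to-date daemon is left undisturbed. Stripped to its shape, with hypothetical file names:

	diff -u current.unit new.unit || { sudo mv new.unit current.unit && sudo systemctl daemon-reload && sudo systemctl restart docker; }

Here the command produced no diff output and succeeded, so the unit was unchanged, the restart branch was skipped, and provisioning completed in under two seconds.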
	I1210 06:05:43.838408    4268 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 06:05:43.838408    4268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:05:43.843330    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:05:43.846525    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.900024    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.029680    4268 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:05:44.037541    4268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:05:44.037541    4268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:05:44.037541    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 06:05:44.038757    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 06:05:44.043153    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 06:05:44.055384    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 06:05:44.088733    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 06:05:44.119280    4268 start.go:296] duration metric: took 280.8687ms for postStartSetup
	I1210 06:05:44.124009    4268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:05:44.126784    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.182044    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.316788    4268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:05:44.324843    4268 fix.go:56] duration metric: took 2.4919994s for fixHost
	I1210 06:05:44.324843    4268 start.go:83] releasing machines lock for "functional-871500", held for 2.4919994s
	I1210 06:05:44.328923    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:44.381793    4268 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 06:05:44.385677    4268 ssh_runner.go:195] Run: cat /version.json
	I1210 06:05:44.386221    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.389012    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.441429    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.442469    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	W1210 06:05:44.560137    4268 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 06:05:44.563959    4268 ssh_runner.go:195] Run: systemctl --version
	I1210 06:05:44.577858    4268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:05:44.589693    4268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:05:44.594579    4268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:05:44.610144    4268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:05:44.610144    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:44.610144    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:44.610144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:44.637889    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:05:44.661390    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:05:44.675857    4268 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:05:44.679682    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1210 06:05:44.688700    4268 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 06:05:44.688700    4268 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 06:05:44.703844    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.722937    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:05:44.745466    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.764651    4268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:05:44.786058    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:05:44.803943    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:05:44.825767    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:05:44.844801    4268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:05:44.865558    4268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:05:44.882679    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:45.109626    4268 ssh_runner.go:195] Run: sudo systemctl restart containerd
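After this batch of sed edits and the restart, containerd should be running with the cgroupfs driver (SystemdCgroup = false), matching the "cgroupfs" driver detected on the host above; the kubelet config generated later in this run uses the same value. A quick check, assuming the default config path:

	# The runc runtime must not use the systemd cgroup driver here.
	sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
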
	I1210 06:05:45.372410    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:45.372488    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:45.376725    4268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 06:05:45.404975    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.427035    4268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:05:45.453802    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.475732    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:05:45.493918    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:45.524028    4268 ssh_runner.go:195] Run: which cri-dockerd
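Note the endpoint switch: /etc/crictl.yaml was first pointed at containerd's socket and, now that the docker runtime has been selected, is rewritten to cri-dockerd's socket so that a bare crictl talks to the right CRI. The endpoint can also be supplied per invocation; a sketch:

	# One-off invocation that does not depend on /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
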
	I1210 06:05:45.535197    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 06:05:45.548646    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 06:05:45.572635    4268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 06:05:45.724104    4268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 06:05:45.868966    4268 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 06:05:45.869084    4268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 06:05:45.901140    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 06:05:45.921606    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:46.074547    4268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 06:05:47.064088    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:05:47.086611    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 06:05:47.108595    4268 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 06:05:47.134813    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.157362    4268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 06:05:47.294625    4268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 06:05:47.445441    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.584076    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 06:05:47.608696    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 06:05:47.631875    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.796110    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 06:05:47.918397    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.936744    4268 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 06:05:47.940567    4268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 06:05:47.948674    4268 start.go:564] Will wait 60s for crictl version
	I1210 06:05:47.953390    4268 ssh_runner.go:195] Run: which crictl
	I1210 06:05:47.964351    4268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:05:48.010041    4268 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 06:05:48.014800    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.056120    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.095316    4268 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 06:05:48.098689    4268 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 06:05:48.299568    4268 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 06:05:48.303921    4268 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
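The host IP is discovered from inside the node container by resolving Docker Desktop's built-in host.docker.internal name, then checked against /etc/hosts before host.minikube.internal is added. The lookup can be reproduced by hand with the container name from this run:

	# Resolve the Docker Desktop host alias from inside the node container.
	docker exec -t functional-871500 dig +short host.docker.internal   # 192.168.65.254 in this run
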
	I1210 06:05:48.317690    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:48.374840    4268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:05:48.377516    4268 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:05:48.377840    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:48.382038    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.417200    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.417200    4268 docker.go:621] Images already preloaded, skipping extraction
	I1210 06:05:48.421745    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.451984    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.451984    4268 cache_images.go:86] Images are preloaded, skipping loading
	I1210 06:05:48.451984    4268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 06:05:48.451984    4268 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:05:48.455620    4268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 06:05:48.856277    4268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:05:48.856277    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:48.856277    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:48.856353    4268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:05:48.856353    4268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:05:48.856531    4268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
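A generated config like the one above can be sanity-checked offline before it is shipped to /var/tmp/minikube/kubeadm.yaml.new; one option on newer kubeadm releases (subcommand availability depends on the kubeadm version; file name hypothetical):

	# Check the kubeadm config documents for schema and version problems.
	kubeadm config validate --config kubeadm.yaml
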
	I1210 06:05:48.860333    4268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:05:48.875980    4268 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:05:48.881099    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:05:48.893740    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 06:05:48.914721    4268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:05:48.934821    4268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1210 06:05:48.960316    4268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:05:48.972694    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:49.123118    4268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:05:49.255861    4268 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 06:05:49.255861    4268 certs.go:195] generating shared ca certs ...
	I1210 06:05:49.255861    4268 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:05:49.256902    4268 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 06:05:49.257201    4268 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 06:05:49.257329    4268 certs.go:257] generating profile certs ...
	I1210 06:05:49.257955    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 06:05:49.259233    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 06:05:49.259785    4268 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 06:05:49.259886    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 06:05:49.260142    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 06:05:49.260323    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 06:05:49.260584    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 06:05:49.260858    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 06:05:49.261989    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:05:49.291586    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:05:49.322755    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:05:49.365403    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:05:49.393221    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:05:49.422952    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:05:49.452108    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:05:49.481059    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:05:49.509597    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 06:05:49.540303    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 06:05:49.570456    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:05:49.600563    4268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:05:49.625982    4268 ssh_runner.go:195] Run: openssl version
	I1210 06:05:49.646811    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.665986    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 06:05:49.688481    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.697316    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.701997    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.756268    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
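The hash-then-test sequence above implements OpenSSL's hashed CA directory layout: openssl x509 -hash prints the subject-name hash (51391683 for 11304.pem in this run), and verifiers expect a symlink named <hash>.0 in /etc/ssl/certs. Creating such a link by hand looks like this (certificate path hypothetical):

	# Link a CA certificate into the hashed layout that OpenSSL scans.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${h}.0"
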
	I1210 06:05:49.774475    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.792936    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 06:05:49.812585    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.820754    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.824743    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.871530    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:05:49.889957    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.909516    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:05:49.930952    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.939674    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.944280    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.991244    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 06:05:50.007593    4268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:05:50.020119    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:05:50.067344    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:05:50.116460    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:05:50.165520    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:05:50.215057    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:05:50.263721    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
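Each of these runs passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); that exit code is what decides whether minikube regenerates a cert. Standalone, the check looks like this (path taken from this run):

	# Exit 0 if still valid 24h from now; non-zero would trigger regeneration.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"
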
	I1210 06:05:50.308021    4268 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:50.311614    4268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.346733    4268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:05:50.360552    4268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:05:50.360580    4268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:05:50.364548    4268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:05:50.378578    4268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.383414    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:50.435757    4268 kubeconfig.go:125] found "functional-871500" server: "https://127.0.0.1:50086"
	I1210 06:05:50.443021    4268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:05:50.458083    4268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:49:09.404233938 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:05:48.941571180 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1210 06:05:50.458083    4268 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:05:50.462114    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.496795    4268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:05:50.522144    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:05:50.536445    4268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 05:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:53 /etc/kubernetes/scheduler.conf
	
	I1210 06:05:50.540786    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:05:50.560978    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:05:50.573948    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.578606    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:05:50.598347    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.624166    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.628272    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.646130    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:05:50.660886    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.664931    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:05:50.683408    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:05:50.706370    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:50.943551    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.490493    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.736715    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.807636    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
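This is the heart of the soft restart: rather than a full kubeadm init, the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) are replayed against the regenerated kubeadm.yaml. The same sequence as a plain script (paths as in the log; the word splitting of $phase is intentional):

	# Replay the init phases one by one against the updated config.
	BIN=/var/lib/minikube/binaries/v1.35.0-rc.1
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done
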
	I1210 06:05:51.910188    4268 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:05:51.914776    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats at roughly 500ms intervals, about 120 runs in total, without ever finding an apiserver process ...]
	I1210 06:06:51.416929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:51.915349    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:51.946488    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.946488    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:51.950223    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:51.978835    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.978835    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:51.982107    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:52.014720    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.014720    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:52.018659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:52.049849    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.049849    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:52.053813    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:52.081237    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.081237    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:52.085458    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:52.112058    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.112058    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:52.115659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:52.145147    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.145147    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:52.145147    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:52.145147    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:52.208920    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:52.208920    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:52.238472    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:52.238472    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:52.325434    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:06:52.325434    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:52.325434    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:52.371108    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:52.371108    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
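The container-status line above stacks two fallbacks: the inner $(which crictl || echo crictl) substitutes the bare command name when which finds nothing, so the failure message stays readable, and the outer || sudo docker ps -a falls back to the Docker CLI when crictl fails entirely. The pattern in isolation (tool names hypothetical):

	# Prefer tool-a when installed; otherwise let tool-b answer.
	sudo "$(which tool-a || echo tool-a)" status || sudo tool-b status
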
	I1210 06:06:54.948530    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:54.972933    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:55.001036    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.001036    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:55.004290    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:55.032943    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.033029    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:55.036668    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:55.063474    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.063474    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:55.066822    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:55.095034    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.095034    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:55.098842    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:55.125575    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.125575    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:55.128696    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:55.158053    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.158053    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:55.161225    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:55.188975    4268 logs.go:282] 0 containers: []
	W1210 06:06:55.188975    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:55.188975    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:55.188975    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:55.248739    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:55.248739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:55.280459    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:55.280994    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:55.367741    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:55.357007   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.358211   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.360797   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.361943   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:55.363117   23359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
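The failure mode is the same throughout this section: the pinned kubectl at /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl, pointed at the cluster's own kubeconfig, cannot reach the apiserver on localhost:8441 because nothing is listening there, which is consistent with the empty container probes. Two quick checks from inside the node separate "apiserver never started" from "kubeconfig points at the wrong port" (illustrative commands, run e.g. via minikube ssh):

    # Is anything listening on the expected apiserver port?
    sudo ss -ltn | grep :8441
    # Which endpoint does the on-node kubeconfig actually target?
    sudo grep 'server:' /var/lib/minikube/kubeconfig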
	I1210 06:06:55.367741    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:55.367741    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:55.414124    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:55.414124    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:06:57.973920    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:06:57.999748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:58.030430    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.030430    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:58.034282    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:58.061116    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.061116    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:58.064723    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:58.091888    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.091888    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:58.095665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:58.123935    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.123935    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:58.127445    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:58.154330    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.154330    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:58.157668    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:58.184825    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.184842    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:58.188704    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:58.215563    4268 logs.go:282] 0 containers: []
	W1210 06:06:58.215563    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:06:58.215563    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:58.215563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:58.279351    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:58.279351    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:58.309783    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:58.309783    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:58.393286    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:58.382107   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.383660   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.385217   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.386504   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:58.387262   23510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:06:58.393286    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:58.393286    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:58.439058    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:58.439058    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
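From this point on, the section is the same probe-and-collect cycle repeated on a roughly three-second cadence (06:06:54, 06:06:57, 06:07:00, ...): minikube is apparently waiting for an apiserver process to appear and re-gathers the same diagnostics on every miss until its wait deadline expires. The shape of the loop, sketched in shell (collect_diagnostics is a hypothetical stand-in for the gathering steps above, not a real minikube helper):

    # Illustrative wait loop; not minikube's actual implementation.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      collect_diagnostics   # hypothetical: journalctl, dmesg, describe nodes, crictl/docker ps
      sleep 3
    done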
	I1210 06:07:00.997523    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:01.021828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:01.053542    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.053618    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:01.056677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:01.085032    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.085032    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:01.088780    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:01.117302    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.117302    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:01.120752    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:01.148911    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.148911    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:01.152164    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:01.180119    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.180119    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:01.183696    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:01.213108    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.213108    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:01.216996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:01.243946    4268 logs.go:282] 0 containers: []
	W1210 06:07:01.243946    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:01.243946    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:01.243946    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:01.326430    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:01.314277   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.315265   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.319210   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.320225   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:01.321052   23651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:01.326430    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:01.326459    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:01.370668    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:01.370668    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:01.422598    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:01.422598    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:01.484373    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:01.484373    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:04.021695    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:04.044749    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:04.073749    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.073749    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:04.077613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:04.108271    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.108271    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:04.111712    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:04.140635    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.140635    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:04.143876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:04.172340    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.172340    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:04.176392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:04.202586    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.202586    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:04.207209    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:04.235404    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.235404    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:04.238669    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:04.269296    4268 logs.go:282] 0 containers: []
	W1210 06:07:04.269296    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:04.269296    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:04.269296    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:04.333843    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:04.333843    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:04.363955    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:04.363955    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:04.444558    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:04.436237   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.437185   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.438566   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.439650   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:04.440909   23806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:04.444558    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:04.445092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:04.491255    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:04.491387    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:07.052134    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:07.075975    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:07.105912    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.105948    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:07.109453    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:07.138043    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.138043    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:07.141960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:07.168363    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.168363    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:07.172168    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:07.199814    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.199814    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:07.204084    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:07.233711    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.233711    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:07.236936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:07.264933    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.264933    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:07.268534    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:07.295981    4268 logs.go:282] 0 containers: []
	W1210 06:07:07.295981    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:07.295981    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:07.295981    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:07.344067    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:07.344067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:07.405677    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:07.405677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:07.435735    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:07.435735    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:07.519926    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:07.510232   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.511256   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.513848   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.515885   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:07.517364   23967 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:07.519926    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:07.519926    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:10.070185    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:10.092250    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:10.122601    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.122601    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:10.128232    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:10.158544    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.158544    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:10.162689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:10.190392    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.190392    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:10.194663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:10.222107    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.222107    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:10.226125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:10.252783    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.252783    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:10.256304    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:10.283397    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.283397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:10.287203    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:10.315917    4268 logs.go:282] 0 containers: []
	W1210 06:07:10.315961    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:10.315961    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:10.315997    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:10.379613    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:10.379613    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:10.413908    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:10.413937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:10.494940    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:10.485289   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.486129   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.488300   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.489233   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:10.492215   24101 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:10.494940    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:10.494940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:10.539292    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:10.539292    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:13.096499    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:13.120311    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:13.151343    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.151343    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:13.156101    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:13.187337    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.187337    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:13.190270    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:13.219411    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.219439    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:13.222798    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:13.249771    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.249771    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:13.253831    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:13.281375    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.281375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:13.285787    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:13.313732    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.313732    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:13.317446    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:13.345700    4268 logs.go:282] 0 containers: []
	W1210 06:07:13.345700    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:13.345700    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:13.345745    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:13.390315    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:13.390315    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:13.448999    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:13.448999    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:13.479056    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:13.479056    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:13.560071    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:13.549957   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.551004   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.553955   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.555549   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:13.557226   24263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:13.560113    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:13.560113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:16.115604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:16.139172    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:16.166471    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.166471    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:16.169908    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:16.197926    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.197926    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:16.201554    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:16.228895    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.228895    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:16.233644    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:16.261634    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.261634    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:16.265293    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:16.290403    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.290403    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:16.294262    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:16.322219    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.322219    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:16.326037    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:16.354206    4268 logs.go:282] 0 containers: []
	W1210 06:07:16.354206    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:16.354206    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:16.354206    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:16.419895    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:16.419895    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:16.451758    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:16.451758    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:16.530533    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:16.520075   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.522508   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.523655   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.525647   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:16.527182   24404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:16.530563    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:16.530563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:16.577832    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:16.577832    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:19.135824    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:19.161092    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:19.193445    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.193445    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:19.196612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:19.224210    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.224263    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:19.227196    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:19.255555    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.255555    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:19.259039    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:19.288567    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.288567    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:19.292040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:19.320589    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.320589    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:19.324658    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:19.351319    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.351319    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:19.355558    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:19.381847    4268 logs.go:282] 0 containers: []
	W1210 06:07:19.381847    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:19.381847    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:19.381847    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:19.449609    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:19.449609    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:19.481141    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:19.481141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:19.571805    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:19.560658   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.564410   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.566480   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.567250   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:19.569393   24554 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:19.571876    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:19.571876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:19.618670    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:19.618670    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.172007    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:22.194631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:22.223852    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.223852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:22.227213    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:22.259065    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.259065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:22.262548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:22.294541    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.294541    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:22.297904    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:22.326231    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.326231    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:22.330450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:22.355798    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.355798    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:22.359259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:22.387519    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.387519    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:22.391049    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:22.418109    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.418109    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:22.418109    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:22.418109    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:22.499328    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:22.489790   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.490896   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.491903   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.494536   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.495501   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:22.499328    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:22.499328    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:22.543726    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:22.543726    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.597115    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:22.597115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:22.659436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:22.659436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.192803    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:25.217242    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:25.244925    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.244925    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:25.251081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:25.278953    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.278953    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:25.282665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:25.309347    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.309347    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:25.313377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:25.341665    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.341665    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:25.345141    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:25.371901    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.371901    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:25.375742    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:25.403341    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.403365    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:25.406946    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:25.437008    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.437008    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:25.437008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:25.437008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:25.488060    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:25.488060    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:25.551490    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:25.551490    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.582172    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:25.582172    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:25.657523    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
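For "describe nodes" minikube does not use the host's kubectl; it runs the kubectl binary matching the cluster version from /var/lib/minikube/binaries/ against the in-node kubeconfig. Since that kubeconfig points at the dead localhost:8441 endpoint, the command exits 1 with empty stdout, and each round logs its stderr twice: once inside the error message and once in the quoted ** stderr ** block. Replaying it by hand, with an exit-code echo added for illustration:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig; echo "exit: $?"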
	I1210 06:07:25.657523    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:25.657523    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:28.209929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:28.232843    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:28.261372    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.261372    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:28.265040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:28.292477    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.292505    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:28.296009    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:28.320486    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.320486    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:28.324280    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:28.351296    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.351296    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:28.355074    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:28.390195    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.390195    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:28.394179    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:28.421613    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.421613    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:28.425545    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:28.453777    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.453777    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:28.453777    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:28.453777    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:28.499488    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:28.499488    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:28.561776    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:28.561776    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:28.593067    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:28.593112    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:28.668150    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:28.668150    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:28.668150    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.218151    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:31.240923    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:31.271844    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.271844    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:31.275477    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:31.301769    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.301769    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:31.305651    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:31.332406    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.332406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:31.336005    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:31.363591    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.363591    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:31.366859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:31.394594    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.394594    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:31.397901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:31.427778    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.427801    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:31.431499    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:31.458018    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.458018    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:31.458052    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:31.458052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.504698    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:31.504698    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:31.560046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:31.560046    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:31.620436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:31.620436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:31.648931    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:31.648931    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:31.727951    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:31.718357   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.719615   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.720837   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.722218   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.723669   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:31.718357   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.719615   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.720837   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.722218   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.723669   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
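From here the cycle repeats unchanged: the timestamps (06:07:22, :25, :28, :31, ...) show a fixed cadence of roughly three seconds between rounds, consistent with a poll-until-deadline wait on the apiserver. A minimal sketch of such a wait, assuming a 3 s interval and an arbitrary 90 s budget (both illustrative values, not taken from minikube):

    deadline=$((SECONDS + 90))
    until curl -sk https://localhost:8441/version >/dev/null; do
      [ "$SECONDS" -lt "$deadline" ] || { echo "apiserver never came up"; exit 1; }
      sleep 3
    done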
	I1210 06:07:34.232606    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:34.257055    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:34.288020    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.288020    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:34.291618    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:34.322496    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.322496    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:34.326328    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:34.354501    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.354501    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:34.358073    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:34.385199    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.385199    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:34.389140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:34.414316    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.414316    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:34.418016    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:34.445073    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.445073    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:34.448529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:34.479046    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.479046    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:34.479046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:34.479113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:34.540365    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:34.540365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:34.571107    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:34.571107    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:34.651369    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:34.639849   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.640797   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.643867   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.644948   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.645803   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:34.639849   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.640797   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.643867   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.644948   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.645803   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:34.651369    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:34.651369    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:34.695236    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:34.695236    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:37.251178    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:37.274825    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:37.305218    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.305218    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:37.308994    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:37.338625    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.338625    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:37.342529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:37.370849    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.370849    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:37.374620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:37.403744    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.403744    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:37.407240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:37.435170    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.435170    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:37.439347    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:37.464351    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.464351    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:37.468757    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:37.497371    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.497371    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:37.497371    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:37.497371    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:37.559564    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:37.559564    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:37.588662    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:37.588662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:37.667884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:37.657246   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.658358   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.659261   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.661714   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.662832   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:37.657246   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.658358   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.659261   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.661714   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.662832   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:37.667913    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:37.667913    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:37.713250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:37.713250    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.270184    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:40.293820    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:40.321872    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.321872    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:40.325799    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:40.355617    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.355617    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:40.361421    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:40.389168    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.389168    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:40.393374    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:40.425493    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.425493    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:40.429344    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:40.458342    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.458342    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:40.462356    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:40.488885    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.488885    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:40.492942    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:40.521222    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.521222    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:40.521222    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:40.521222    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:40.571132    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:40.571132    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.622991    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:40.622991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:40.680418    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:40.680418    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:40.710767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:40.710767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:40.786884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:40.777278   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.778087   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.780838   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.781817   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.782760   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:40.777278   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.778087   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.780838   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.781817   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.782760   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:43.292302    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:43.316416    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:43.341307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.341307    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:43.345027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:43.370307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.370307    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:43.374217    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:43.402135    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.402135    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:43.405647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:43.433991    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.434045    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:43.437705    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:43.465221    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.465221    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:43.468945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:43.494153    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.494153    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:43.497409    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:43.526559    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.526559    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:43.526559    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:43.526559    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:43.592034    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:43.592034    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:43.621625    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:43.621625    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:43.699225    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:43.688896   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.689744   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.691973   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.692804   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.695050   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:43.688896   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.689744   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.691973   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.692804   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.695050   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:43.699225    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:43.699225    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:43.742683    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:43.742683    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:46.296260    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:46.320038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:46.350083    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.350127    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:46.354017    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:46.392667    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.392667    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:46.396040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:46.423477    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.423477    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:46.427089    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:46.457044    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.457044    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:46.461309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:46.492133    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.492133    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:46.496367    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:46.523683    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.523683    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:46.528125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:46.556662    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.556662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:46.556662    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:46.556662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:46.622661    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:46.622661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:46.653087    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:46.653087    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:46.737036    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:46.725117   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.726037   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.729627   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.731599   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.733777   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:46.725117   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.726037   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.729627   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.731599   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.733777   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:46.737036    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:46.737036    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:46.781873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:46.781873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.335832    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:49.359246    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:49.391481    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.391481    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:49.395372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:49.425639    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.425639    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:49.429616    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:49.457273    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.457273    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:49.460755    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:49.490445    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.490445    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:49.496643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:49.526292    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.526292    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:49.530371    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:49.557314    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.557359    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:49.561590    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:49.591753    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.591753    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:49.591753    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:49.591753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:49.621767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:49.621767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:49.707223    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:49.697858   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.698899   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.699785   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.703604   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.704517   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:49.697858   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.698899   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.699785   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.703604   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.704517   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:49.707223    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:49.707223    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:49.751158    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:49.751158    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.799885    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:49.799885    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.366303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:52.390862    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:52.425737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.425770    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:52.429505    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:52.457550    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.457550    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:52.461709    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:52.488406    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.488406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:52.492766    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:52.518703    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.518703    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:52.522666    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:52.550619    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.550619    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:52.554570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:52.583512    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.583512    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:52.587153    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:52.614737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.614737    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:52.614737    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:52.614811    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.677940    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:52.677940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:52.709363    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:52.709363    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:52.791705    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:52.781560   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.782422   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.785208   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.786343   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.787080   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:52.781560   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.782422   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.785208   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.786343   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.787080   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:52.791705    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:52.791705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:52.835266    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:52.835266    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.404989    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:55.433031    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:55.462583    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.462583    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:55.466139    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:55.492223    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.492223    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:55.495759    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:55.523357    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.523357    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:55.530265    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:55.561457    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.561457    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:55.565257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:55.594178    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.594178    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:55.599162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:55.627914    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.627914    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:55.632194    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:55.659551    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.659551    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:55.659551    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:55.659551    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:55.705228    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:55.705228    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.758018    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:55.758018    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:55.819730    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:55.819730    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:55.848800    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:55.848800    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:55.933602    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:55.919237   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.920249   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.924524   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.925340   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.926446   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:55.919237   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.920249   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.924524   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.925340   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.926446   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:58.439191    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:58.463828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:58.497407    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.497407    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:58.500686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:58.530436    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.530436    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:58.533685    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:58.561959    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.561959    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:58.566417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:58.596302    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.596302    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:58.600866    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:58.629840    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.629840    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:58.633617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:58.660127    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.660127    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:58.663612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:58.692189    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.692189    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:58.692189    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:58.692189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:58.754556    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:58.754556    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:58.784251    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:58.784251    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:58.866899    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:58.854125   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.855115   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.856391   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.857985   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.859051   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:58.854125   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.855115   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.856391   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.857985   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.859051   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:58.866899    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:58.866899    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:58.914793    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:58.914793    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:01.470823    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:01.494469    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:01.522381    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.522381    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:01.528647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:01.558012    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.558012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:01.564708    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:01.593835    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.593835    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:01.599056    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:01.623982    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.623982    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:01.627479    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:01.658260    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.658260    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:01.665836    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:01.697664    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.697664    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:01.702191    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:01.729816    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.729816    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:01.729816    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:01.729816    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:01.788909    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:01.788909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:01.819503    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:01.819503    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:01.901569    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:01.889489   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.890512   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.891524   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.892377   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.894500   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:01.889489   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.890512   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.891524   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.892377   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.894500   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:01.901569    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:01.901569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:01.947339    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:01.947339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.502871    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:04.526200    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:04.558543    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.558543    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:04.563525    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:04.595332    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.595332    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:04.598770    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:04.630572    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.630572    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:04.635710    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:04.664369    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.664369    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:04.668951    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:04.699382    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.699382    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:04.702341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:04.732274    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.732274    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:04.735620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:04.763772    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.763772    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:04.763772    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:04.763866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:04.790890    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:04.790890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:04.872353    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:04.859391   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.860351   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.864058   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.865079   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.866076   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:04.859391   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.860351   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.864058   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.865079   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.866076   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:04.872353    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:04.872353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:04.916959    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:04.916959    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.965485    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:04.965560    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.533039    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:07.559067    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:07.588219    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.588219    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:07.591689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:07.619350    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.619350    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:07.622996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:07.652464    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.652464    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:07.657960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:07.688918    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.688918    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:07.692848    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:07.722521    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.722521    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:07.726603    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:07.755963    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.755963    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:07.760630    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:07.790252    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.790252    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:07.790252    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:07.790327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.852838    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:07.852838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:07.883838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:07.883838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:07.961862    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:07.961862    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:07.961862    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:08.003991    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:08.003991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:10.563653    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:10.586319    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:10.613645    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.613645    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:10.617237    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:10.646795    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.646795    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:10.652694    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:10.683833    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.683833    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:10.688294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:10.718409    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.718409    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:10.722444    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:10.746660    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.746660    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:10.751527    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:10.781904    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.781904    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:10.787205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:10.814738    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.814738    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:10.814738    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:10.814792    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:10.841682    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:10.841682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:10.922604    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:10.922639    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:10.922661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:10.968300    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:10.968300    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:11.016711    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:11.016711    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:13.584862    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:13.607945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:13.639757    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.639757    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:13.643362    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:13.673001    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.673001    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:13.676417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:13.706241    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.706241    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:13.710040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:13.735617    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.735840    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:13.738750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:13.768821    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.768821    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:13.772175    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:13.801535    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.801535    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:13.805351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:13.832881    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.832881    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:13.832881    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:13.832881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:13.860208    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:13.860208    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:13.946278    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:13.946278    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:13.946278    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:13.991759    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:13.991759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:14.045144    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:14.045144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:16.612310    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:16.638180    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:16.667851    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.667851    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:16.671631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:16.700699    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.700699    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:16.706277    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:16.734906    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.734906    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:16.738957    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:16.766394    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.766394    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:16.772893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:16.802581    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.802581    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:16.808905    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:16.836566    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.836566    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:16.840142    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:16.868091    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.868091    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:16.868091    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:16.868091    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:16.897687    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:16.897687    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:16.975509    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:16.975509    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:16.975509    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:17.020453    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:17.020453    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:17.069748    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:17.069748    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:19.636799    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:19.659733    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:19.690968    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.690968    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:19.694619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:19.722863    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.722863    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:19.726187    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:19.752031    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.752031    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:19.755396    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:19.783376    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.783376    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:19.786987    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:19.814219    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.814219    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:19.817751    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:19.847004    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.847004    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:19.850402    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:19.881752    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.881752    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:19.881752    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:19.881752    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:19.930019    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:19.930019    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:19.983089    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:19.983089    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:20.045802    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:20.045802    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:20.077460    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:20.077460    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:20.162436    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:22.668475    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:22.691439    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:22.721661    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.721661    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:22.725309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:22.754031    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.754031    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:22.758027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:22.785864    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.785864    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:22.789619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:22.817384    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.817384    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:22.820727    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:22.851186    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.851186    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:22.855014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:22.883476    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.883476    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:22.887734    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:22.914588    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.914588    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:22.914588    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:22.914588    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:22.977189    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:22.977189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:23.007230    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:23.007230    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:23.085937    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
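	# The refused dials above ("dial tcp [::1]:8441: connect: connection refused") can be
	# checked directly; a minimal probe, assuming the same node shell and that ss is
	# available there (port 8441 comes from the kubeconfig used by the describe-nodes call):
	#
	#   curl -ksS https://localhost:8441/api || echo "apiserver not answering on 8441"
	#   sudo ss -ltnp | grep 8441 || echo "no listener on 8441"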
	I1210 06:08:23.085937    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:23.085937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:23.128830    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:23.128830    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:25.690109    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:25.713674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:25.742134    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.742164    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:25.745613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:25.771702    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.771789    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:25.775334    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:25.803239    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.803239    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:25.806686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:25.836716    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.836716    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:25.840387    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:25.867927    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.867927    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:25.871435    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:25.898205    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.898205    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:25.901920    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:25.931569    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.931569    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:25.931569    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:25.931569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:25.995604    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:25.995604    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:26.025733    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:26.025733    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:26.107058    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:26.094116   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.098292   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.099172   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.100188   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.101258   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:26.107115    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:26.107115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:26.150320    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:26.150320    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
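Each cycle above probes Docker for the control-plane containers by their k8s_ name prefix and finds none, meaning the kubelet never created the static pods at all. A one-shot version of the same check (a sketch; the k8s_ prefix matches the filters used in the log) is:

	# List anything the kubelet created; an empty result points at the kubelet
	# or its static-pod manifests rather than at a crashed apiserver.
	minikube ssh -- "docker ps -a --filter name=k8s_ --format '{{.Names}}\t{{.Status}}'"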
	I1210 06:08:28.710236    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:28.735443    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:28.764680    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.764680    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:28.768537    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:28.795455    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.795455    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:28.799570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:28.826729    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.826729    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:28.830406    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:28.859191    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.859191    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:28.862919    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:28.888542    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.888542    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:28.892494    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:28.919951    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.919951    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:28.923351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:28.952838    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.952838    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:28.952838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:28.952909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:29.034485    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:29.023348   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.024187   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.026875   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.028120   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.029114   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:29.034485    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:29.034485    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:29.079092    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:29.079092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:29.133555    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:29.133555    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:29.195221    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:29.195221    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
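With the containers missing entirely, the kubelet journal gathered above is the artifact most likely to name the root cause. Filtering it down (a sketch; assumes systemd inside the node, which the journalctl calls above already rely on) keeps only the interesting lines:

	# Surface the kubelet's own errors; the last few usually explain why the
	# static pods (apiserver, etcd, scheduler, controller-manager) never started.
	minikube ssh -- "sudo journalctl -u kubelet --no-pager | grep -iE 'error|failed|fatal' | tail -n 20"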
	I1210 06:08:31.733591    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:31.757690    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:31.790674    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.790674    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:31.794674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:31.825657    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.825721    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:31.829403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:31.858023    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.858023    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:31.861500    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:31.890867    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.890914    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:31.894490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:31.922953    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.922953    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:31.927186    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:31.954090    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.954090    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:31.957750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:31.984886    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.984920    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:31.984920    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:31.984951    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:32.048671    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:32.048671    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:32.079259    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:32.079259    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:32.157323    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:32.157323    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:32.157323    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:32.203321    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:32.203321    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:34.760108    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:34.782876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:34.810927    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.810927    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:34.814663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:34.839714    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.839714    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:34.843722    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:34.870089    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.870089    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:34.873513    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:34.905367    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.905367    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:34.909301    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:34.938914    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.938914    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:34.942767    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:34.972329    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.972329    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:34.976046    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:35.000780    4268 logs.go:282] 0 containers: []
	W1210 06:08:35.000780    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:35.000780    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:35.000838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:35.065353    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:35.065353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:35.095634    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:35.095634    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:35.171365    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:35.160656   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.162343   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.163491   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.165073   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.166057   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:35.171365    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:35.171365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:35.215605    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:35.215605    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
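The memcache.go lines are kubectl's client-side discovery cache failing before any real request goes out; the kubeconfig itself parses fine, it simply points at a dead endpoint. Verifying that endpoint (a sketch; the path matches the --kubeconfig flag used above) is a single grep:

	# Confirm the kubeconfig points at the expected apiserver address and port.
	minikube ssh -- "sudo grep 'server:' /var/lib/minikube/kubeconfig"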
	I1210 06:08:37.774322    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:37.798677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:37.827936    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.827990    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:37.831228    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:37.860987    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.861065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:37.864478    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:37.891877    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.891877    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:37.895716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:37.920808    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.920808    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:37.924309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:37.952553    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.952553    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:37.956204    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:37.985826    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.985826    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:37.989201    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:38.017309    4268 logs.go:282] 0 containers: []
	W1210 06:08:38.017309    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:38.017309    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:38.017309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:38.082876    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:38.083876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:38.113796    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:38.113821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:38.196088    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:38.184048   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.187012   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.188966   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.190400   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.191695   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:38.196123    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:38.196149    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:38.241227    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:38.241227    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:40.798944    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:40.821450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:40.850414    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.850414    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:40.853927    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:40.881239    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.881239    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:40.885281    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:40.912960    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.912960    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:40.918840    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:40.950469    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.950469    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:40.954401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:40.982375    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.982375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:40.986123    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:41.016542    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.016542    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:41.019622    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:41.049577    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.049662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:41.049662    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:41.049694    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:41.076753    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:41.076753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:41.160411    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:41.148000   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.148852   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.151925   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.154289   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.155876   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:41.160445    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:41.160473    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:41.206612    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:41.206612    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:41.253715    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:41.253715    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:43.821604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:43.845650    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:43.874167    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.874207    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:43.877812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:43.905508    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.905508    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:43.909372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:43.939372    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.939426    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:43.942841    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:43.972078    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.972078    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:43.975697    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:44.002329    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.002329    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:44.005898    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:44.035821    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.035821    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:44.039602    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:44.066798    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.066839    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:44.066839    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:44.066839    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:44.128660    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:44.128660    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:44.159235    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:44.159235    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:44.242361    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:44.231367   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.232316   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.235308   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.236181   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.238800   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:44.242361    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:44.242361    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:44.289326    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:44.289326    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:46.852233    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:46.874656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:46.903255    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.903255    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:46.907117    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:46.935108    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.935108    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:46.938584    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:46.967525    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.967525    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:46.973772    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:47.001558    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.001558    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:47.005083    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:47.034015    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.034015    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:47.039271    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:47.068459    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.068459    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:47.071981    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:47.102013    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.102013    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:47.102044    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:47.102065    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:47.164592    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:47.164592    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:47.195491    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:47.195491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:47.278044    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:47.265991   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.268610   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.269567   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.271904   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.272596   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:47.278044    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:47.278044    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:47.324863    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:47.324863    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:49.880727    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:49.903789    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:49.935342    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.935342    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:49.938737    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:49.965312    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.965312    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:49.968607    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:49.996188    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.996188    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:50.001257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:50.027750    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.027750    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:50.031128    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:50.062729    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.062803    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:50.067118    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:50.095830    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.095830    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:50.099864    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:50.130283    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.130283    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:50.130283    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:50.130283    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:50.193360    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:50.193360    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:50.221703    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:50.221703    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:50.303176    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:50.293680   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.294854   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.296200   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.298483   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.299446   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:50.303176    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:50.303176    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:50.370163    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:50.370163    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:52.928303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:52.953491    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:52.981271    4268 logs.go:282] 0 containers: []
	W1210 06:08:52.981271    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:52.985316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:53.013881    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.013881    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:53.017036    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:53.045261    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.045261    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:53.049312    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:53.077577    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.077577    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:53.080557    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:53.110750    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.110750    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:53.114132    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:53.141372    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.141372    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:53.145576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:53.175705    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.175705    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:53.175705    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:53.175705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:53.237519    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:53.237519    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:53.267260    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:53.267260    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:53.363780    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:53.355380   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.356544   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.357888   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.359124   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.360377   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:08:53.363780    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:53.363780    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:53.409834    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:53.409834    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
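The timestamps (06:08:25, 06:08:28, 06:08:31, ...) show the health check polling on a roughly three-second interval until its overall deadline expires. A minimal re-creation of that wait loop (a sketch, not minikube's actual implementation) looks like:

	# Poll the apiserver health endpoint every ~3s, giving up after ~100 tries.
	for _ in $(seq 1 100); do
		curl -sk --max-time 2 https://localhost:8441/healthz >/dev/null && break
		sleep 3
	done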
	I1210 06:08:55.976440    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:56.001300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:56.033852    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.033852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:56.037643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:56.065934    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.065934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:56.072377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:56.102560    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.102560    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:56.106392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:56.143025    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.143025    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:56.149239    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:56.176909    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.176909    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:56.180641    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:56.208166    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.208227    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:56.211221    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:56.240358    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.240358    4268 logs.go:284] No container was found matching "kindnet"
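
The lookup loop above interrogates Docker for each control-plane component by container name. Under cri-dockerd, kubelet-created containers carry a `k8s_<container>_<pod>_...` name prefix, which is why the filters ask for `k8s_kube-apiserver`, `k8s_etcd`, and so on; an empty result for every component means the control plane was never brought up, consistent with the refused connections. The same sweep can be run manually on the node:

    # List matching control-plane containers, running or exited,
    # using the same name filters the log gathers above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        echo "== $c =="
        docker ps -a --filter "name=k8s_$c" --format '{{.ID}} {{.Status}}'
    done
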
	I1210 06:08:56.240358    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:56.240358    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:56.303618    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:56.303618    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:56.333844    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:56.333844    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:56.416014    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:56.416014    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:56.416014    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:56.461496    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:56.461496    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
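
The container-status command relies on a small shell fallback: `which crictl || echo crictl` expands to crictl's full path when the binary exists and to the bare word `crictl` when it does not, so in the latter case the first command fails immediately and the `|| sudo docker ps -a` alternative takes over. The idiom in isolation:

    # Prefer crictl when installed; otherwise fall back to the docker CLI.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
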
	I1210 06:08:59.013428    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:59.038379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:59.067727    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.067758    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:59.071379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:59.104272    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.104272    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:59.107653    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:59.133866    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.133866    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:59.137442    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:59.164317    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.164317    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:59.168171    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:59.198264    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.198291    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:59.202014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:59.229252    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.229252    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:59.233058    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:59.262804    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.262837    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:59.262837    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:59.262866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:59.309986    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:59.309986    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.362017    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:59.362052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:59.422749    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:59.422749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:59.453982    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:59.453982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:59.534843    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:59.524756   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.525914   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.526844   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.529305   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.530549   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:59.524756   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.525914   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.526844   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.529305   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.530549   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:02.039970    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:02.063736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:02.094049    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.094049    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:02.097680    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:02.124934    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.124934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:02.130724    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:02.158566    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.158566    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:02.162548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:02.188736    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.188736    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:02.192205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:02.222271    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.222271    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:02.225729    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:02.256473    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.256473    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:02.260671    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:02.287011    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.287011    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:02.287011    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:02.287011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:02.392011    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:02.382734   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.383733   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.385038   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.386241   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.387283   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:02.382734   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.383733   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.385038   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.386241   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.387283   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:02.392011    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:02.392011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:02.440008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:02.440008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:02.494764    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:02.494764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:02.553322    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:02.553322    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
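
The dmesg call repeated in every pass trims kernel messages down to warnings and worse. Reading the flags against util-linux dmesg (worth confirming on your own man page, since these options vary by version): `-H` prints human-readable timestamps, `-P` suppresses the pager that `-H` would otherwise invoke, `-L=never` disables color, and `--level warn,err,crit,alert,emerg` filters by priority. Standalone it is simply:

    # Newest 400 warning-or-worse kernel log lines, plain text, no pager.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
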
	I1210 06:09:05.090291    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:05.112936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:05.141630    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.141630    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:05.144882    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:05.180128    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.180128    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:05.184542    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:05.213219    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.213219    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:05.216935    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:05.244351    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.244351    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:05.248038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:05.277710    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.277760    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:05.281504    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:05.310297    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.310297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:05.314071    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:05.352094    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.352094    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:05.352094    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:05.352094    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:05.398783    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:05.398896    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:05.458685    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:05.458685    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:05.489319    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:05.489319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:05.565657    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:05.565657    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:05.565657    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.115745    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:08.138736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:08.171066    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.171066    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:08.174894    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:08.201941    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.201941    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:08.205547    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:08.233859    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.233859    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:08.237566    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:08.264996    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.264996    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:08.269259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:08.294641    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.294641    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:08.298901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:08.350200    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.350200    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:08.356240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:08.383315    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.383315    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:08.383354    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:08.383372    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:08.448982    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:08.448982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:08.479093    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:08.479093    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:08.560338    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:08.560338    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:08.560338    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.606173    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:08.606173    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
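
Docker-engine and CRI logs come from a single journalctl call covering both units, as the `journalctl -u docker -u cri-docker` line above shows: `-u` may be repeated to merge units, and `-n 400` caps the output at the newest 400 lines. When running it interactively, adding `--no-pager` keeps the output streaming (inside ssh_runner there is no TTY, so journalctl skips the pager on its own):

    # Merged docker + cri-docker service logs, newest 400 lines.
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager
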
	I1210 06:09:11.159744    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
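
Each pass opens with the pgrep probe shown above: `-f` matches the pattern against each process's full command line, `-x` requires the whole command line to match, and `-n` keeps only the newest hit. The probe staying silent is what keeps this loop polling. An equivalent standalone check:

    # Is any process whose full command line matches the pattern alive?
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process found"
    else
        echo "no apiserver process"
    fi
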
	I1210 06:09:11.183765    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:11.210674    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.210698    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:11.214341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:11.240117    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.240117    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:11.243522    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:11.272551    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.272551    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:11.276401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:11.305619    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.305619    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:11.309310    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:11.360405    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.360447    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:11.363925    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:11.393251    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.393251    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:11.397006    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:11.426962    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.426962    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:11.426962    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:11.426962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:11.477327    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:11.477327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:11.532161    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:11.532161    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:11.592212    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:11.592212    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:11.622686    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:11.622686    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:11.705726    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:14.210675    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:14.234399    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:14.264863    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.264863    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:14.268775    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:14.300413    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.300413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:14.304487    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:14.346847    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.346847    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:14.350643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:14.380435    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.380435    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:14.384376    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:14.412797    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.412797    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:14.416519    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:14.447397    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.447397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:14.450969    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:14.478632    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.478695    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:14.478695    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:14.478695    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:14.528915    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:14.528915    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:14.588962    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:14.588962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:14.618677    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:14.618677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:14.700289    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:14.700289    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:14.700289    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.249092    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:17.272763    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:17.300862    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.300952    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:17.306099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:17.346725    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.346725    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:17.350199    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:17.377982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.377982    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:17.380998    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:17.409995    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.409995    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:17.414294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:17.442988    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.442988    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:17.449120    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:17.475982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.475982    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:17.479552    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:17.506308    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.506308    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:17.506308    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:17.506308    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.553141    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:17.553141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:17.607169    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:17.607169    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:17.668742    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:17.668742    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:17.697789    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:17.697789    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:17.779510    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
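
Taken together, this whole stretch is one wait loop: roughly every three seconds minikube re-checks for an apiserver process and containers, re-gathers kubelet, dmesg, and Docker logs, and retries `kubectl describe nodes`, which keeps failing while nothing listens on 8441. A condensed manual equivalent of that wait (a sketch, not minikube's own logic; it assumes the apiserver serves the standard `/healthz` endpoint once it is up):

    # Poll until the local apiserver answers, mirroring the ~3s cadence above.
    until curl -ksf https://localhost:8441/healthz >/dev/null; do
        echo "apiserver not ready, retrying..."
        sleep 3
    done
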
	I1210 06:09:20.283521    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:20.307295    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:20.338053    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.338053    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:20.341656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:20.372543    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.372543    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:20.376481    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:20.403212    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.403212    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:20.406617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:20.433422    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.433422    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:20.437081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:20.465523    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.465523    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:20.469716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:20.497769    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.497769    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:20.501184    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:20.528203    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.528203    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:20.528203    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:20.528203    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:20.604309    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:20.604309    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:20.604309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:20.649121    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:20.649121    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:20.700336    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:20.700336    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:20.761156    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:20.761156    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:23.296453    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:23.318440    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:23.351977    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.351977    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:23.355449    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:23.384390    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.384413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:23.387748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:23.416613    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.416613    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:23.422740    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:23.447410    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.447410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:23.450859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:23.481298    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.481298    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:23.484812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:23.510855    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.510855    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:23.514267    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:23.543042    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.543042    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:23.543042    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:23.543042    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:23.608264    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:23.608264    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:23.639456    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:23.639491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:23.717275    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
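
Every "describe nodes" attempt in these polling cycles fails the same way: the in-VM kubectl dials the apiserver at localhost:8441 and is refused on every retry. That points to nothing listening on the apiserver port at all, rather than an authentication or kubeconfig problem.
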
	I1210 06:09:23.717275    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:23.717319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:23.761563    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:23.761563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
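
Each ~3-second cycle here is minikube's wait for the apiserver to come back: it polls for a kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) and, while none is found, lists each expected control-plane container and dumps kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal bash sketch of the per-component container check, assuming the component list and the kubelet's k8s_ container-name prefix shown in the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # Any container (running or exited) whose name carries the k8s_<component> prefix.
      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      [ -z "$ids" ] && echo "No container was found matching \"${c}\""
    done

Here every check returns zero containers, so the control-plane pods never even reached the Docker runtime.
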
	I1210 06:09:26.321131    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:26.344893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:26.376780    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.376780    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:26.380359    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:26.408268    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.408268    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:26.411660    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:26.440862    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.440862    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:26.444048    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:26.473546    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.473546    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:26.476599    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:26.505151    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.505151    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:26.508748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:26.538121    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.538121    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:26.542550    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:26.569122    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.569122    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:26.569122    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:26.569122    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:26.629615    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:26.629615    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:26.660648    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:26.660648    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:26.741888    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:26.741888    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:26.741888    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:26.787954    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:26.787954    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:29.348252    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:29.372474    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:29.401265    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.401265    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:29.404730    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:29.435756    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.435805    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:29.439300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:29.470279    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.470279    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:29.474091    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:29.502410    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.502410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:29.505917    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:29.535595    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.535595    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:29.539532    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:29.568556    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.568556    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:29.572020    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:29.599739    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.599739    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:29.599739    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:29.599739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:29.661483    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:29.661483    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:29.691565    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:29.691565    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:29.774718    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:29.774718    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:29.774718    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:29.816878    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:29.816878    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:32.374472    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:32.397027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:32.429904    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.429904    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:32.433647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:32.460698    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.460756    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:32.464368    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:32.491682    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.491682    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:32.495066    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:32.523531    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.523531    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:32.526773    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:32.557102    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.557102    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:32.563482    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:32.591959    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.591959    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:32.595725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:32.625486    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.625486    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:32.625486    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:32.625486    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:32.688451    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:32.688451    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:32.719004    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:32.719004    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:32.800020    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:32.800020    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:32.800020    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:32.849061    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:32.849061    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:35.404633    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:35.429425    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:35.458232    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.458277    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:35.462316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:35.489097    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.489097    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:35.492725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:35.522979    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.522979    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:35.526587    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:35.555948    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.555948    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:35.559915    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:35.589220    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.589220    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:35.592883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:35.619789    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.619850    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:35.622872    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:35.649510    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.649534    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:35.649534    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:35.649534    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:35.714882    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:35.715881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:35.745666    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:35.745666    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:35.825749    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:35.812454   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.813402   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.819556   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.820578   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.821180   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:35.812454   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.813402   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.819556   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.820578   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.821180   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:35.825749    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:35.825749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:35.871102    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:35.871102    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.430887    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:38.453030    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:38.484706    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.484706    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:38.488140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:38.517210    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.517210    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:38.521162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:38.549348    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.549348    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:38.553103    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:38.580109    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.580109    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:38.583794    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:38.613855    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.613934    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:38.618771    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:38.647097    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.647097    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:38.650932    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:38.680610    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.680610    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:38.680610    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:38.680682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:38.758813    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:38.749300   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.750109   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753125   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753957   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.756268   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:38.749300   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.750109   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753125   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753957   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.756268   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:38.758813    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:38.758813    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:38.807873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:38.807873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.867039    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:38.867067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:38.926759    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:38.926759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:41.462739    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:41.490464    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:41.518622    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.518622    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:41.524470    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:41.551685    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.551685    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:41.556977    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:41.584962    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.584962    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:41.588808    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:41.620594    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.620594    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:41.624185    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:41.656800    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.656800    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:41.659821    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:41.692628    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.692628    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:41.696287    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:41.726090    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.726090    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:41.726090    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:41.726090    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:41.803427    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:41.793678   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.794849   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.796092   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.797004   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.799523   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:41.793678   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.794849   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.796092   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.797004   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.799523   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:41.803427    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:41.803427    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:41.849170    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:41.849170    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:41.903654    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:41.903654    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:41.962299    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:41.962299    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.500876    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:44.523403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:44.554849    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.554849    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:44.558352    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:44.588012    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.588012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:44.591883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:44.617831    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.617831    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:44.621490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:44.648689    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.648689    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:44.652490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:44.684042    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.684042    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:44.687539    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:44.716817    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.716856    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:44.720738    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:44.747250    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.747250    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:44.747250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:44.747318    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:44.798396    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:44.798396    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:44.858678    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:44.858678    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.888995    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:44.888995    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:44.964778    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:44.955796   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.956638   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.958906   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.960018   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.961253   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:44.955796   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.956638   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.958906   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.960018   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.961253   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:44.964778    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:44.964778    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.517925    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:47.541890    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:47.573716    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.573716    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:47.577684    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:47.606333    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.606333    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:47.610098    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:47.635733    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.635733    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:47.639327    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:47.669406    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.669406    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:47.673219    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:47.700633    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.700633    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:47.705121    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:47.733323    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.733323    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:47.737104    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:47.763071    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.763071    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:47.763071    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:47.763140    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:47.826821    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:47.826821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:47.856590    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:47.856590    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:47.933339    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:47.922383   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.923323   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.927777   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.928818   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.930519   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:47.922383   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.923323   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.927777   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.928818   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.930519   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:47.933339    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:47.933339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.979012    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:47.979012    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.532699    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:50.557240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:50.585813    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.585813    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:50.589369    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:50.622124    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.622124    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:50.625576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:50.650920    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.650920    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:50.653943    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:50.682545    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.682545    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:50.686340    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:50.715893    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.715893    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:50.719099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:50.748297    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.748297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:50.751451    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:50.779846    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.779866    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:50.779890    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:50.779890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.830198    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:50.830198    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:50.891330    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:50.891330    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:50.921331    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:50.921331    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:51.001029    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:50.991827   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.992701   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.996634   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.997913   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.999128   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:50.991827   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.992701   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.996634   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.997913   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.999128   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:51.001029    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:51.001029    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:53.554507    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:53.573659    4268 kubeadm.go:602] duration metric: took 4m3.2099315s to restartPrimaryControlPlane
	W1210 06:09:53.573659    4268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
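
The retry budget is now exhausted: after 4m3s of polling, no kube-apiserver process ever appeared, so minikube abandons restarting the existing control plane and instead wipes it with kubeadm reset and re-runs kubeadm init, as the following lines show.
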
	I1210 06:09:53.578070    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:09:54.057699    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:09:54.081355    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:09:54.095306    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:09:54.099578    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:09:54.113717    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:09:54.113717    4268 kubeadm.go:158] found existing configuration files:
	
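The ls probe exits with status 2 because none of the four kubeconfig files survived the reset, so "found existing configuration files" is followed by an empty list. minikube nonetheless walks each file below, grepping for the expected https://control-plane.minikube.internal:8441 endpoint and removing the file whenever the grep fails.
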
	I1210 06:09:54.118539    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:09:54.131350    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:09:54.135225    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:09:54.152710    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:09:54.164770    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:09:54.168898    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:09:54.185476    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.198490    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:09:54.202839    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.221180    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:09:54.234980    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:09:54.239197    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
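
A condensed sketch of that cleanup pass, using the same endpoint and file set as the log above (the pattern is matched as a plain fixed string here for clarity):

    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint;
      # otherwise (including when it does not exist) remove it so that
      # kubeadm init can regenerate it from scratch.
      sudo grep -qF "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
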
	I1210 06:09:54.256185    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:09:54.367900    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:09:54.450675    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:09:54.549884    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:13:55.304144    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:13:55.304213    4268 kubeadm.go:319] 
	I1210 06:13:55.304353    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:13:55.308106    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:13:55.308252    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:13:55.308389    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:13:55.308682    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:13:55.309221    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:13:55.309881    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:13:55.310536    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:13:55.310642    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] OS: Linux
	I1210 06:13:55.310721    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:13:55.311254    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:13:55.311367    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:13:55.311538    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:13:55.311670    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:13:55.311750    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:13:55.311824    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:13:55.312446    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:13:55.316886    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:13:55.321599    4268 out.go:252]   - Booting up control plane ...
	I1210 06:13:55.322123    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:13:55.323161    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000948554s
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 
	W1210 06:13:55.324159    4268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000948554s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
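The failure above comes down to the kubelet health endpoint never answering. Per kubeadm's own message, its kubelet-check is equivalent to curling http://127.0.0.1:10248/healthz; a bash sketch of the same probe with the 4m0s budget reported above (-f is used here so a non-200 response also counts as unhealthy):

    # Poll the kubelet healthz endpoint the way kubeadm's kubelet-check
    # describes, giving up after the 4m0s budget seen in the log.
    deadline=$((SECONDS + 240))
    until curl -sSf http://127.0.0.1:10248/healthz >/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "kubelet not healthy after 4m0s" >&2
            break
        fi
        sleep 2
    done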
	
	I1210 06:13:55.329361    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:13:55.788774    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:13:55.807235    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:13:55.812328    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:13:55.824166    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:13:55.824166    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:13:55.829624    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:13:55.842900    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:13:55.846743    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:13:55.863007    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:13:55.876646    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:13:55.881322    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:13:55.900836    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.916668    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:13:55.921481    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.939813    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:13:55.954759    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:13:55.960058    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:13:55.976998    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:13:56.092783    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:13:56.183907    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:13:56.283504    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:17:56.874768    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:17:56.874768    4268 kubeadm.go:319] 
	I1210 06:17:56.875332    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:17:56.883860    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:17:56.883860    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:17:56.884428    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:17:56.884973    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:17:56.885550    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:17:56.886100    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] OS: Linux
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:17:56.886670    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:17:56.887297    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:17:56.890313    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:17:56.890917    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:17:56.891009    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:17:56.892230    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:17:56.892299    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:17:56.896667    4268 out.go:252]   - Booting up control plane ...
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:17:56.897780    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:17:56.897839    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077699s
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.897839    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:403] duration metric: took 12m6.5812244s to StartCluster
	I1210 06:17:56.898801    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:17:56.902808    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:17:57.138118    4268 cri.go:89] found id: ""
	I1210 06:17:57.138148    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.138172    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:17:57.138172    4268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:17:57.142698    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:17:57.185021    4268 cri.go:89] found id: ""
	I1210 06:17:57.185021    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.185021    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:17:57.185092    4268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:17:57.189241    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:17:57.228303    4268 cri.go:89] found id: ""
	I1210 06:17:57.228350    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.228350    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:17:57.228350    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:17:57.233381    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:17:57.304677    4268 cri.go:89] found id: ""
	I1210 06:17:57.304677    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.304677    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:17:57.304677    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:17:57.309206    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:17:57.355436    4268 cri.go:89] found id: ""
	I1210 06:17:57.355436    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.355436    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:17:57.355436    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:17:57.359252    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:17:57.404878    4268 cri.go:89] found id: ""
	I1210 06:17:57.404878    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.404878    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:17:57.404878    4268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:17:57.409876    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:17:57.451416    4268 cri.go:89] found id: ""
	I1210 06:17:57.451416    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.451499    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:17:57.451499    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:17:57.451499    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:17:57.506664    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:17:57.506764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:17:57.578699    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:17:57.578699    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:17:57.610293    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:17:57.610293    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:17:57.852641    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:17:57.852641    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:17:57.852641    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
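The diagnostic sweep above can be replayed by hand inside the node; these are the same commands minikube just ran, with paths exactly as logged:

    # Replay minikube's log-gathering pass from the entries above.
    sudo crictl ps -a --quiet --name=kube-apiserver   # repeated per component name
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -u cri-docker -n 400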
	W1210 06:17:57.899832    4268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:17:57.899832    4268 out.go:285] * 
	W1210 06:17:57.899832    4268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 06:17:57.900356    4268 out.go:285] * 
	W1210 06:17:57.902683    4268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:17:57.916933    4268 out.go:203] 
	W1210 06:17:57.920352    4268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1210 06:17:57.920907    4268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:17:57.921055    4268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:17:57.924778    4268 out.go:203] 
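Spelled out, the suggested remediation above corresponds to a start invocation along these lines. This is a sketch, not a verified fix: the profile name functional-871500 is taken from the Docker journal below, and the docker driver is assumed from the suite under test.

    # Hypothetical invocation of the suggestion above; the profile name and
    # driver are assumptions, the --extra-config flag is quoted from the log.
    minikube start -p functional-871500 --driver=docker \
        --extra-config=kubelet.cgroup-driver=systemd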
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939273296Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:20:19.350099   43296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:19.350827   43296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:19.353153   43296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:19.353938   43296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:19.356108   43296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:20:19 up  1:48,  0 user,  load average: 0.38, 0.30, 0.42
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:20:16 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:16 functional-871500 kubelet[43107]: E1210 06:20:16.508143   43107 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:16 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:16 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:17 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 506.
	Dec 10 06:20:17 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:17 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:17 functional-871500 kubelet[43134]: E1210 06:20:17.278022   43134 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:17 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:17 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:17 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 507.
	Dec 10 06:20:17 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:17 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:18 functional-871500 kubelet[43163]: E1210 06:20:18.018240   43163 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:18 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:18 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:18 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 508.
	Dec 10 06:20:18 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:18 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:18 functional-871500 kubelet[43190]: E1210 06:20:18.781350   43190 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:18 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:18 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:19 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 509.
	Dec 10 06:20:19 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:19 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
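The kubelet crash loop quoted above ("kubelet is configured to not run on a host using cgroup v1") is the cgroup v1 validation described in the SystemVerification warning, which names its own escape hatches: set the KubeletConfiguration option 'FailCgroupV1' to 'false' and explicitly skip the validation. A minimal sketch under those assumptions (profile name taken from this report; the kubeadm line is that tool's generic preflight-skip mechanism, not necessarily the path minikube itself takes):

	# remedy suggested verbatim in the log: restart with the systemd cgroup driver
	minikube start -p functional-871500 --extra-config=kubelet.cgroup-driver=systemd
	# kubeadm's standard way to skip the SystemVerification preflight check named in the warning
	kubeadm init --ignore-preflight-errors=SystemVerification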
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (599.3655ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (5.31s)
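The "Stopped" apiserver state matches the connection-refused errors against localhost:8441 earlier in the log. As a hypothetical manual cross-check from inside the node (8441 is the APIServerPort in this profile's cluster config; /livez is the standard kube-apiserver health endpoint):

	minikube -p functional-871500 ssh -- curl -sk https://localhost:8441/livez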

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (122.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect


=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-871500 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-871500 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (95.2042ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:50086/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-871500 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-871500 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-871500 describe po hello-node-connect: exit status 1 (50.3494495s)

** stderr ** 
	E1210 06:19:29.559965   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.647359   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.689020   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.731360   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.770019   12264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1614: "kubectl --context functional-871500 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-871500 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-871500 logs -l app=hello-node-connect: exit status 1 (40.3127472s)

** stderr ** 
	E1210 06:20:19.908445    3484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:30.006728    3484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:40.045677    3484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:50.087646    3484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1620: "kubectl --context functional-871500 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-871500 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-871500 describe svc hello-node-connect: exit status 1 (29.3293015s)

** stderr ** 
	E1210 06:21:00.223188   13604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:21:10.307015   13604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"

** /stderr **
functional_test.go:1626: "kubectl --context functional-871500 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
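Each of the three failed kubectl calls above spends 30-50 seconds in client-side discovery retries against the dead endpoint before giving up. When reproducing this by hand, kubectl's standard --request-timeout flag puts a ceiling on each request (the 10s value below is illustrative, not from this run):

	kubectl --context functional-871500 --request-timeout=10s describe svc hello-node-connect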
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
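The EOF errors against https://127.0.0.1:50086 throughout this post-mortem line up with the port bindings in the inspect output above: host port 50086 forwards to the container's 8441/tcp, the apiserver port. A sketch for pulling that mapping out directly with docker inspect's Go-template format (container name taken from this report):

	docker inspect --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-871500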
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (590.0217ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                                                 ARGS                                                                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service    │ functional-871500 service hello-node --url                                                                                                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start      │ -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1                                                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ start      │ -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1                                                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ start      │ -p functional-871500 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-rc.1                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-871500 --alsologtostderr -v=1                                                                                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ ssh        │ functional-871500 ssh sudo cat /etc/test/nested/copy/11304/hosts                                                                                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ docker-env │ functional-871500 docker-env                                                                                                                                                                          │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh sudo cat /etc/ssl/certs/11304.pem                                                                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh sudo cat /usr/share/ca-certificates/11304.pem                                                                                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh sudo cat /etc/ssl/certs/113042.pem                                                                                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh sudo cat /usr/share/ca-certificates/113042.pem                                                                                                                                  │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ cp         │ functional-871500 cp testdata\cp-test.txt /home/docker/cp-test.txt                                                                                                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh -n functional-871500 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ cp         │ functional-871500 cp functional-871500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm1996276364\001\cp-test.txt │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh -n functional-871500 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ cp         │ functional-871500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                             │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ ssh        │ functional-871500 ssh -n functional-871500 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ license    │                                                                                                                                                                                                       │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:21 UTC │
	│ ssh        │ functional-871500 ssh echo hello                                                                                                                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ ssh        │ functional-871500 ssh cat /etc/hostname                                                                                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ tunnel     │ functional-871500 tunnel --alsologtostderr                                                                                                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	│ tunnel     │ functional-871500 tunnel --alsologtostderr                                                                                                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	│ tunnel     │ functional-871500 tunnel --alsologtostderr                                                                                                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:20:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:20:22.531843   12640 out.go:360] Setting OutFile to fd 2036 ...
	I1210 06:20:22.576720   12640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:22.576720   12640 out.go:374] Setting ErrFile to fd 1148...
	I1210 06:20:22.576720   12640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:22.591325   12640 out.go:368] Setting JSON to false
	I1210 06:20:22.594699   12640 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6554,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:20:22.594699   12640 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:20:22.597972   12640 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:20:22.599558   12640 notify.go:221] Checking for updates...
	I1210 06:20:22.602223   12640 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:20:22.605675   12640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:22.607439   12640 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:20:22.610109   12640 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:20:22.612669   12640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:22.615423   12640 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:20:22.616385   12640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:22.730681   12640 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:20:22.734669   12640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:22.962932   12640 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:22.941747367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:22.967546   12640 out.go:179] * Using the docker driver based on existing profile
	I1210 06:20:22.971123   12640 start.go:309] selected driver: docker
	I1210 06:20:22.971123   12640 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:22.971123   12640 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:22.978991   12640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:23.209100   12640 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:23.190667433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:23.243923   12640 cni.go:84] Creating CNI manager for ""
	I1210 06:20:23.243923   12640 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:20:23.243923   12640 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:23.247921   12640 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939273296Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:21:20.961274   44958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:21:20.962288   44958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:21:20.963809   44958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:21:20.964718   44958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:21:20.966634   44958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:21:21 up  1:49,  0 user,  load average: 0.39, 0.31, 0.41
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:21:18 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:21:18 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 588.
	Dec 10 06:21:18 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:18 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:18 functional-871500 kubelet[44804]: E1210 06:21:18.751852   44804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:21:18 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:21:18 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:21:19 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 589.
	Dec 10 06:21:19 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:19 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:19 functional-871500 kubelet[44816]: E1210 06:21:19.531211   44816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:21:19 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:21:19 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:21:20 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 590.
	Dec 10 06:21:20 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:20 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:20 functional-871500 kubelet[44844]: E1210 06:21:20.253145   44844 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:21:20 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:21:20 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:21:20 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 591.
	Dec 10 06:21:20 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:20 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:21:21 functional-871500 kubelet[44966]: E1210 06:21:21.014746   44966 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:21:21 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:21:21 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
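Note on the failure mode: the kubelet crash loop in the log above lines up with the dockerd cgroup v1 deprecation warning near the top of the same log; the WSL2 host is still on cgroup v1, and this kubelet's configuration refuses to start there. A minimal way to confirm both sides from a shell, assuming the usual kubeadm config path and upstream's failCgroupV1 kubelet option (neither is shown in this report):

	# "cgroup2fs" means the node is on cgroup v2; "tmpfs" means cgroup v1.
	minikube -p functional-871500 ssh -- stat -fc %T /sys/fs/cgroup/
	# Check whether the kubelet config rejects cgroup v1 hosts (failCgroupV1; field and path assumed, not taken from this run).
	minikube -p functional-871500 ssh -- grep -i failcgroupv1 /var/lib/kubelet/config.yaml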
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (591.4881ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (122.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (243.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
E1210 06:19:45.911836   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
E1210 06:23:02.284349   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:50086/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (685.6092ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
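Note on the EOF warnings above: the helper is polling the pod list through the forwarded API port, so the same query can be reproduced by hand using the label selector from the failing GET; this sketch assumes the kubeconfig context carries the profile name, as minikube normally writes it:

	# Same namespace and label selector as the URL in the warnings above.
	kubectl --context functional-871500 -n kube-system get pods -l integration-test=storage-provisioner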
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
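Note on the port map above: the 127.0.0.1:50086 endpoint seen in the earlier apiserver errors is the ephemeral host port published for container port 8441/tcp (the API server). A sketch for pulling that mapping back out with docker inspect's standard Go-template formatting (nothing specific to this report):

	# Print the host port bound to the container's 8441/tcp, i.e. the forwarded Kubernetes API server endpoint.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-871500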
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (584.123ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.4485457s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ tunnel         │ functional-871500 tunnel --alsologtostderr                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	│ tunnel         │ functional-871500 tunnel --alsologtostderr                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	│ ssh            │ functional-871500 ssh sudo systemctl is-active crio                                                                                                       │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	│ update-context │ functional-871500 update-context --alsologtostderr -v=2                                                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ update-context │ functional-871500 update-context --alsologtostderr -v=2                                                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ update-context │ functional-871500 update-context --alsologtostderr -v=2                                                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr                                                             │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls                                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr                                                             │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls                                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr                                                             │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls                                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image save kicbase/echo-server:functional-871500 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image rm kicbase/echo-server:functional-871500 --alsologtostderr                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls                                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls                                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image save --daemon kicbase/echo-server:functional-871500 --alsologtostderr                                                             │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ ssh            │ functional-871500 ssh pgrep buildkitd                                                                                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │                     │
	│ image          │ functional-871500 image ls --format yaml --alsologtostderr                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls --format short --alsologtostderr                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls --format json --alsologtostderr                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls --format table --alsologtostderr                                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image build -t localhost/my-image:functional-871500 testdata\build --alsologtostderr                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	│ image          │ functional-871500 image ls                                                                                                                                │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:21 UTC │ 10 Dec 25 06:21 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:20:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:20:22.531843   12640 out.go:360] Setting OutFile to fd 2036 ...
	I1210 06:20:22.576720   12640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:22.576720   12640 out.go:374] Setting ErrFile to fd 1148...
	I1210 06:20:22.576720   12640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:22.591325   12640 out.go:368] Setting JSON to false
	I1210 06:20:22.594699   12640 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6554,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:20:22.594699   12640 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:20:22.597972   12640 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:20:22.599558   12640 notify.go:221] Checking for updates...
	I1210 06:20:22.602223   12640 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:20:22.605675   12640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:22.607439   12640 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:20:22.610109   12640 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:20:22.612669   12640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:22.615423   12640 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:20:22.616385   12640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:22.730681   12640 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:20:22.734669   12640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:22.962932   12640 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:22.941747367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:22.967546   12640 out.go:179] * Using the docker driver based on existing profile
	I1210 06:20:22.971123   12640 start.go:309] selected driver: docker
	I1210 06:20:22.971123   12640 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:22.971123   12640 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:22.978991   12640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:23.209100   12640 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:23.190667433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:23.243923   12640 cni.go:84] Creating CNI manager for ""
	I1210 06:20:23.243923   12640 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:20:23.243923   12640 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:23.247921   12640 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:21:40 functional-871500 dockerd[21148]: time="2025-12-10T06:21:40.646732986Z" level=info msg="sbJoin: gwep4 ''->'1644abc7be49', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:23:21.570888   47823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:23:21.572023   47823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:23:21.573068   47823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:23:21.574190   47823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:23:21.575311   47823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:23:21 up  1:51,  0 user,  load average: 0.34, 0.35, 0.42
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:23:18 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:23:18 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 748.
	Dec 10 06:23:18 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:18 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:18 functional-871500 kubelet[47645]: E1210 06:23:18.999590   47645 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:23:19 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:23:19 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:23:19 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 749.
	Dec 10 06:23:19 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:19 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:19 functional-871500 kubelet[47672]: E1210 06:23:19.759861   47672 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:23:19 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:23:19 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:23:20 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 750.
	Dec 10 06:23:20 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:20 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:20 functional-871500 kubelet[47701]: E1210 06:23:20.499902   47701 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:23:20 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:23:20 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:23:21 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 751.
	Dec 10 06:23:21 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:21 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:23:21 functional-871500 kubelet[47798]: E1210 06:23:21.239102   47798 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:23:21 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:23:21 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (597.7868ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (243.40s)
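
The kubelet journal above points at the root cause shared by this block of failures: the kubelet exits at startup because the node is running cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it roughly every 0.7 seconds (restart counter 748 through 751 within four seconds of log), the control plane never comes up, and every probe of localhost:8441 fails with connection refused. A minimal diagnostic sketch, using only the profile name taken from the log; the guess that the kubelet's failCgroupV1 setting (KubeletConfiguration, Kubernetes v1.31+) produces this exact error is an inference from the message text, not something the log confirms:

	# Which filesystem backs /sys/fs/cgroup inside the kic node:
	# "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 layout.
	out/minikube-windows-amd64.exe ssh -p functional-871500 -- stat -fc %T /sys/fs/cgroup/

	# Inferred, not confirmed by the log: look for the failCgroupV1 knob in the
	# kubelet config that kubeadm writes inside the node.
	out/minikube-windows-amd64.exe ssh -p functional-871500 -- sudo grep -i failcgroupv1 /var/lib/kubelet/config.yaml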

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (22.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-871500 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-871500 replace --force -f testdata\mysql.yaml: exit status 1 (20.2298107s)

** stderr ** 
	E1210 06:20:39.606744   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:49.696670   14328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:50086/api?timeout=32s": EOF
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:50086/api?timeout=32s": EOF

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-871500 replace --force -f testdata\\mysql.yaml" failed: exit status 1
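
The EOFs above target https://127.0.0.1:50086, which (per the docker inspect output below) is the host port Docker published for the node's apiserver port 8441/tcp. The TCP handshake reaches Docker's port forwarder, but nothing answers inside the container because the apiserver never started behind the crash-looping kubelet, so the client sees EOF rather than connection refused. The mapping can be re-derived with a standard command, assuming only the profile name from the log:

	# Print the host address published for the apiserver port inside the node;
	# this should echo the 127.0.0.1:50086 endpoint the failing kubectl calls used.
	docker port functional-871500 8441/tcp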
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
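
Two details in the inspect output are worth reading together: HostConfig.PortBindings requests HostPort "0" for every port (Docker picks a free one), and NetworkSettings.Ports records what was picked, 22/tcp on 50082 through 8441/tcp on 50086, all bound to 127.0.0.1. The container itself is healthy, with State.Status "running", RestartCount 0, and Memory 4294967296 bytes matching the 4096 MB of the cluster config, so the breakage is confined to Kubernetes inside the node. A sketch that pulls just those fields instead of the whole document, assuming a docker CLI with Go-template support:

	# Container state plus the published apiserver binding, nothing else.
	docker inspect -f '{{.State.Status}} {{index .NetworkSettings.Ports "8441/tcp"}}' functional-871500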
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (606.2712ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.0145219s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                       ARGS                                                        │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache      │ functional-871500 cache reload                                                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh        │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache      │ delete registry.k8s.io/pause:3.1                                                                                  │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache      │ delete registry.k8s.io/pause:latest                                                                               │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl    │ functional-871500 kubectl -- --context functional-871500 get pods                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ start      │ -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all          │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:05 UTC │                     │
	│ addons     │ functional-871500 addons list                                                                                     │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config     │ functional-871500 config unset cpus                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config     │ functional-871500 config get cpus                                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ addons     │ functional-871500 addons list -o json                                                                             │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config     │ functional-871500 config set cpus 2                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config     │ functional-871500 config get cpus                                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config     │ functional-871500 config unset cpus                                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config     │ functional-871500 config get cpus                                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service    │ functional-871500 service list                                                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service    │ functional-871500 service list -o json                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service    │ functional-871500 service --namespace=default --https --url hello-node                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service    │ functional-871500 service hello-node --url --format={{.IP}}                                                       │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service    │ functional-871500 service hello-node --url                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ start      │ -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ start      │ -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ start      │ -p functional-871500 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-rc.1           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-871500 --alsologtostderr -v=1                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │                     │
	│ ssh        │ functional-871500 ssh sudo cat /etc/test/nested/copy/11304/hosts                                                  │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	│ docker-env │ functional-871500 docker-env                                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:20 UTC │ 10 Dec 25 06:20 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:20:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:20:22.531843   12640 out.go:360] Setting OutFile to fd 2036 ...
	I1210 06:20:22.576720   12640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:22.576720   12640 out.go:374] Setting ErrFile to fd 1148...
	I1210 06:20:22.576720   12640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:22.591325   12640 out.go:368] Setting JSON to false
	I1210 06:20:22.594699   12640 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6554,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:20:22.594699   12640 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:20:22.597972   12640 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:20:22.599558   12640 notify.go:221] Checking for updates...
	I1210 06:20:22.602223   12640 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:20:22.605675   12640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:22.607439   12640 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:20:22.610109   12640 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:20:22.612669   12640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:22.615423   12640 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:20:22.616385   12640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:22.730681   12640 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:20:22.734669   12640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:22.962932   12640 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:22.941747367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:22.967546   12640 out.go:179] * Using the docker driver based on existing profile
	I1210 06:20:22.971123   12640 start.go:309] selected driver: docker
	I1210 06:20:22.971123   12640 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:22.971123   12640 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:22.978991   12640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:23.209100   12640 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:23.190667433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:23.243923   12640 cni.go:84] Creating CNI manager for ""
	I1210 06:20:23.243923   12640 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:20:23.243923   12640 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:23.247921   12640 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939273296Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:20:51.263906   44161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:51.264936   44161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:51.265907   44161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:51.268393   44161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:51.269509   44161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:20:51 up  1:49,  0 user,  load average: 0.29, 0.28, 0.41
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:20:48 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:48 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 548.
	Dec 10 06:20:48 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:48 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:48 functional-871500 kubelet[44005]: E1210 06:20:48.759369   44005 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:48 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:48 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:49 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 549.
	Dec 10 06:20:49 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:49 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:49 functional-871500 kubelet[44016]: E1210 06:20:49.503158   44016 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:49 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:49 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:50 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 550.
	Dec 10 06:20:50 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:50 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:50 functional-871500 kubelet[44036]: E1210 06:20:50.267004   44036 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:50 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:50 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:50 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 551.
	Dec 10 06:20:50 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:50 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:51 functional-871500 kubelet[44084]: E1210 06:20:51.012353   44084 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:51 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:51 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
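
The kubelet log above shows the actual blocker behind this group of failures: kubelet v1.35.0-rc.1 refuses to validate its configuration because the node is on cgroup v1, and the kernel section confirms a 5.15 WSL2 kernel. A quick way to confirm the cgroup version on such a host (a diagnostic sketch for reproducing by hand, not part of the test run):

	# cgroup2fs means cgroup v2 (unified hierarchy); tmpfs means cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# the Docker daemon reports the same information
	docker info --format '{{.CgroupVersion}}'

Until the host presents cgroup v2, the kubelet restart counter will keep climbing as seen above.
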
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (591.0071ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (22.52s)
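
The status probes in these post-mortems read single fields from minikube's status struct via Go templates ({{.APIServer}} here, {{.Host}} in the next section). When reproducing by hand, the fields can be combined into one call (a sketch; quote the template to suit your shell):

	out/minikube-windows-amd64.exe status -p functional-871500 --format "{{.Host}} {{.APIServer}}"

Against this cluster that would print Running for the container and Stopped for the apiserver and exit non-zero, which is the exit status 2 the helpers treat as "may be ok".
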

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (54.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-871500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-871500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.351383s)

                                                
                                                
** stderr ** 
	E1210 06:19:29.031454    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.122743    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.166290    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.206618    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.245652    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-871500 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1210 06:19:29.031454    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.122743    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.166290    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.206618    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.245652    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1210 06:19:29.031454    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.122743    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.166290    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.206618    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.245652    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1210 06:19:29.031454    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.122743    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.166290    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.206618    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.245652    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1210 06:19:29.031454    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.122743    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.166290    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.206618    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.245652    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1210 06:19:29.031454    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:39.122743    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:49.166290    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:19:59.206618    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	E1210 06:20:09.245652    5844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:50086/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
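
The test drives the label check through a go-template that needs an extra layer of quoting on Windows, which makes the output above noisy. When reproducing by hand, an equivalent query (a sketch, not the test's own command) is easier to type:

	kubectl --context functional-871500 get nodes --show-labels
	# or just the first node's labels, mirroring the test's template
	kubectl --context functional-871500 get nodes -o jsonpath="{.items[0].metadata.labels}"

Either form fails identically here: nothing is answering on https://127.0.0.1:50086, so no label query can succeed.
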
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-871500
helpers_test.go:244: (dbg) docker inspect functional-871500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8",
	        "Created": "2025-12-10T05:48:42.330122465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43983,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T05:48:42.614396787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/hosts",
	        "LogPath": "/var/lib/docker/containers/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8/47edd36e1affb4940f4c15686db6c19bb88a053777bafb09385966f28e4e30b8-json.log",
	        "Name": "/functional-871500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-871500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-871500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fac1874d9551cc1c2d51340924baeb1d37a6f69a6a7ac01672e4ae7e7659737e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-871500",
	                "Source": "/var/lib/docker/volumes/functional-871500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-871500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-871500",
	                "name.minikube.sigs.k8s.io": "functional-871500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f831997cbc8b0c45b2bf0420afea78f7ce282bcb79cb347c18a4cbe1cbe8de10",
	            "SandboxKey": "/var/run/docker/netns/f831997cbc8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50083"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50085"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-871500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e1a843817431186dff999677a77b90e1cb216c0695cd37f7ec217b50cc77b815",
	                    "EndpointID": "c10c5ce0711796e291e0a729879c86799e1ac37c35b14f1d57f23910383e1c22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-871500",
	                        "47edd36e1aff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
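
The inspect output ties the kubectl errors to the container wiring: 8441/tcp (the apiserver port in this profile) is published on 127.0.0.1:50086, exactly the endpoint the EOF errors name. The minikube log below extracts the 22/tcp binding with a Go template; the same template works for the apiserver binding (a sketch; adjust quoting for your shell):

	# prints 50086 for this container
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-871500

So the port mapping itself is healthy; the EOF comes from the apiserver behind it being down, consistent with the Stopped status seen earlier.
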
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-871500 -n functional-871500: exit status 2 (659.6699ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs -n 25: (1.6644594s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-871500 cache delete minikube-local-cache-test:functional-871500                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl images                                                                 │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest                                       │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ cache   │ functional-871500 cache reload                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ ssh     │ functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │ 10 Dec 25 06:04 UTC │
	│ kubectl │ functional-871500 kubectl -- --context functional-871500 get pods                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:04 UTC │                     │
	│ start   │ -p functional-871500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:05 UTC │                     │
	│ addons  │ functional-871500 addons list                                                                            │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config unset cpus                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config get cpus                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ addons  │ functional-871500 addons list -o json                                                                    │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config set cpus 2                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config get cpus                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config unset cpus                                                                      │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │ 10 Dec 25 06:19 UTC │
	│ config  │ functional-871500 config get cpus                                                                        │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service list                                                                           │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service list -o json                                                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service --namespace=default --https --url hello-node                                   │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service hello-node --url --format={{.IP}}                                              │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	│ service │ functional-871500 service hello-node --url                                                               │ functional-871500 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 06:19 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 06:05:40
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 06:05:40.939558    4268 out.go:360] Setting OutFile to fd 1136 ...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.981558    4268 out.go:374] Setting ErrFile to fd 1864...
	I1210 06:05:40.981558    4268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:05:40.994563    4268 out.go:368] Setting JSON to false
	I1210 06:05:40.997553    4268 start.go:133] hostinfo: {"hostname":"minikube4","uptime":5672,"bootTime":1765341068,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:05:40.997553    4268 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:05:41.001553    4268 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:05:41.004553    4268 notify.go:221] Checking for updates...
	I1210 06:05:41.007553    4268 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:05:41.009554    4268 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:05:41.013554    4268 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:05:41.018172    4268 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:05:41.020466    4268 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:05:41.023475    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:41.023475    4268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:05:41.199301    4268 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:05:41.203110    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.444620    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.42593568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.449620    4268 out.go:179] * Using the docker driver based on existing profile
	I1210 06:05:41.451493    4268 start.go:309] selected driver: docker
	I1210 06:05:41.451493    4268 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.451493    4268 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:05:41.457890    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:05:41.686631    4268 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 06:05:41.6698388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:05:41.735496    4268 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 06:05:41.735496    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:41.735496    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:41.735496    4268 start.go:353] cluster config:
	{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:41.741018    4268 out.go:179] * Starting "functional-871500" primary control-plane node in "functional-871500" cluster
	I1210 06:05:41.744259    4268 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 06:05:41.749232    4268 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 06:05:41.752040    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:41.752173    4268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 06:05:41.752173    4268 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 06:05:41.752173    4268 cache.go:65] Caching tarball of preloaded images
	I1210 06:05:41.752485    4268 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 06:05:41.752621    4268 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 06:05:41.752768    4268 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\config.json ...
	I1210 06:05:41.832812    4268 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 06:05:41.832812    4268 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 06:05:41.832812    4268 cache.go:243] Successfully downloaded all kic artifacts
	I1210 06:05:41.832812    4268 start.go:360] acquireMachinesLock for functional-871500: {Name:mkaa7072cf669cebcb93feb3e66bf80897472d33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 06:05:41.832812    4268 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-871500"
	I1210 06:05:41.832812    4268 start.go:96] Skipping create...Using existing machine configuration
	I1210 06:05:41.832812    4268 fix.go:54] fixHost starting: 
	I1210 06:05:41.839306    4268 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
	I1210 06:05:41.895279    4268 fix.go:112] recreateIfNeeded on functional-871500: state=Running err=<nil>
	W1210 06:05:41.895279    4268 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 06:05:41.898650    4268 out.go:252] * Updating the running docker "functional-871500" container ...
	I1210 06:05:41.898650    4268 machine.go:94] provisionDockerMachine start ...
	I1210 06:05:41.901828    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:41.956991    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:41.957565    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:41.957565    4268 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 06:05:42.140179    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.140179    4268 ubuntu.go:182] provisioning hostname "functional-871500"
	I1210 06:05:42.144876    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.200094    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.200718    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.200718    4268 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-871500 && echo "functional-871500" | sudo tee /etc/hostname
	I1210 06:05:42.397029    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-871500
	
	I1210 06:05:42.400561    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.454568    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:42.455568    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:42.455568    4268 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-871500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-871500/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-871500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 06:05:42.650836    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 06:05:42.650836    4268 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 06:05:42.650836    4268 ubuntu.go:190] setting up certificates
	I1210 06:05:42.650836    4268 provision.go:84] configureAuth start
	I1210 06:05:42.655100    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:42.713113    4268 provision.go:143] copyHostCerts
	I1210 06:05:42.713113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 06:05:42.713113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 06:05:42.713113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 06:05:42.714114    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 06:05:42.714114    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 06:05:42.714114    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 06:05:42.715113    4268 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 06:05:42.715113    4268 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 06:05:42.715113    4268 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 06:05:42.716114    4268 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-871500 san=[127.0.0.1 192.168.49.2 functional-871500 localhost minikube]
	I1210 06:05:42.798580    4268 provision.go:177] copyRemoteCerts
	I1210 06:05:42.802588    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 06:05:42.805578    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:42.862278    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:42.996859    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 06:05:43.030822    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 06:05:43.062798    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 06:05:43.094379    4268 provision.go:87] duration metric: took 443.5373ms to configureAuth
	I1210 06:05:43.094426    4268 ubuntu.go:206] setting minikube options for container-runtime
	I1210 06:05:43.094529    4268 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:05:43.098320    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.157455    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.158049    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.158049    4268 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 06:05:43.340189    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 06:05:43.340189    4268 ubuntu.go:71] root file system type: overlay
	I1210 06:05:43.340189    4268 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 06:05:43.343620    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.397863    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.398871    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.398902    4268 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 06:05:43.595156    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 06:05:43.598799    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.653593    4268 main.go:143] libmachine: Using SSH client type: native
	I1210 06:05:43.654604    4268 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 50082 <nil> <nil>}
	I1210 06:05:43.654630    4268 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 06:05:43.838408    4268 main.go:143] libmachine: SSH cmd err, output: <nil>: 
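
The unit swap just completed uses a compare-then-replace idiom: the provisioner renders the candidate unit to docker.service.new and installs it, reloads systemd, and restarts Docker only when it differs from what is on disk (diff also exits non-zero when the old unit is absent, so a first provision takes the same path). A minimal bash sketch of that idiom, using the paths from this log:

    #!/usr/bin/env bash
    # Replace a systemd unit and restart the service only when the rendered
    # candidate differs from the installed unit (paths as in the log above).
    set -euo pipefail
    unit=/lib/systemd/system/docker.service
    if ! sudo diff -u "$unit" "$unit.new"; then
        sudo mv "$unit.new" "$unit"
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
    fi
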
	I1210 06:05:43.838408    4268 machine.go:97] duration metric: took 1.939733s to provisionDockerMachine
	I1210 06:05:43.838408    4268 start.go:293] postStartSetup for "functional-871500" (driver="docker")
	I1210 06:05:43.838408    4268 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 06:05:43.843330    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 06:05:43.846525    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:43.900024    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.029680    4268 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 06:05:44.037541    4268 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 06:05:44.037541    4268 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 06:05:44.037541    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 06:05:44.038189    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 06:05:44.038757    4268 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts -> hosts in /etc/test/nested/copy/11304
	I1210 06:05:44.043153    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11304
	I1210 06:05:44.055384    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 06:05:44.088733    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts --> /etc/test/nested/copy/11304/hosts (40 bytes)
	I1210 06:05:44.119280    4268 start.go:296] duration metric: took 280.8687ms for postStartSetup
	I1210 06:05:44.124009    4268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:05:44.126784    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.182044    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.316788    4268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 06:05:44.324843    4268 fix.go:56] duration metric: took 2.4919994s for fixHost
	I1210 06:05:44.324843    4268 start.go:83] releasing machines lock for "functional-871500", held for 2.4919994s
	I1210 06:05:44.328923    4268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-871500
	I1210 06:05:44.381793    4268 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 06:05:44.385677    4268 ssh_runner.go:195] Run: cat /version.json
	I1210 06:05:44.386221    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.389012    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:44.441429    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	I1210 06:05:44.442469    4268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
	W1210 06:05:44.560137    4268 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 06:05:44.563959    4268 ssh_runner.go:195] Run: systemctl --version
	I1210 06:05:44.577858    4268 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 06:05:44.589693    4268 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 06:05:44.594579    4268 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 06:05:44.610144    4268 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 06:05:44.610144    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:44.610144    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:44.610144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:44.637889    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 06:05:44.661390    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 06:05:44.675857    4268 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 06:05:44.679682    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1210 06:05:44.688700    4268 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 06:05:44.688700    4268 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 06:05:44.703844    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.722937    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 06:05:44.745466    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 06:05:44.764651    4268 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 06:05:44.786058    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 06:05:44.803943    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 06:05:44.825767    4268 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 06:05:44.844801    4268 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 06:05:44.865558    4268 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 06:05:44.882679    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:45.109626    4268 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 06:05:45.372410    4268 start.go:496] detecting cgroup driver to use...
	I1210 06:05:45.372488    4268 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 06:05:45.376725    4268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 06:05:45.404975    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.427035    4268 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 06:05:45.453802    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 06:05:45.475732    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 06:05:45.493918    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 06:05:45.524028    4268 ssh_runner.go:195] Run: which cri-dockerd
	I1210 06:05:45.535197    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 06:05:45.548646    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 06:05:45.572635    4268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 06:05:45.724104    4268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 06:05:45.868966    4268 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 06:05:45.869084    4268 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 06:05:45.901140    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 06:05:45.921606    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:46.074547    4268 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 06:05:47.064088    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 06:05:47.086611    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 06:05:47.108595    4268 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 06:05:47.134813    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.157362    4268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 06:05:47.294625    4268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 06:05:47.445441    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.584076    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 06:05:47.608696    4268 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 06:05:47.631875    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:47.796110    4268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 06:05:47.918397    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 06:05:47.936744    4268 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 06:05:47.940567    4268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 06:05:47.948674    4268 start.go:564] Will wait 60s for crictl version
	I1210 06:05:47.953390    4268 ssh_runner.go:195] Run: which crictl
	I1210 06:05:47.964351    4268 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 06:05:48.010041    4268 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 06:05:48.014800    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.056120    4268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 06:05:48.095316    4268 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 06:05:48.098689    4268 cli_runner.go:164] Run: docker exec -t functional-871500 dig +short host.docker.internal
	I1210 06:05:48.299568    4268 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 06:05:48.303921    4268 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
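
The host IP is learned from inside the node by querying Docker Desktop's built-in DNS name host.docker.internal, after which the log checks whether that address is already pinned as host.minikube.internal in the node's /etc/hosts. A sketch of the two steps, run here via docker exec; the append on a grep miss is an assumption about the follow-up, which this excerpt does not show:

    # Learn the host IP via Docker Desktop's magic DNS name, then make sure
    # the node can reach it under the stable alias host.minikube.internal.
    host_ip="$(docker exec functional-871500 dig +short host.docker.internal)"
    docker exec functional-871500 sudo sh -c \
        "grep -q 'host.minikube.internal' /etc/hosts || \
         echo '${host_ip} host.minikube.internal' >> /etc/hosts"
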
	I1210 06:05:48.317690    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:48.374840    4268 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 06:05:48.377516    4268 kubeadm.go:884] updating cluster {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 06:05:48.377840    4268 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 06:05:48.382038    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.417200    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.417200    4268 docker.go:621] Images already preloaded, skipping extraction
	I1210 06:05:48.421745    4268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 06:05:48.451984    4268 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-871500
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1210 06:05:48.451984    4268 cache_images.go:86] Images are preloaded, skipping loading
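
The two docker images listings above implement the preload short-circuit: when every image baked into the preload tarball is already present in the node's Docker store, extraction is skipped. A rough sketch of that check; the expected list here is abbreviated to a few of the tags shown above, not the full set minikube consults:

    # Skip the preload tarball when every expected image is already present.
    have="$(docker images --format '{{.Repository}}:{{.Tag}}')"
    missing=0
    for img in registry.k8s.io/kube-apiserver:v1.35.0-rc.1 \
               registry.k8s.io/etcd:3.6.6-0 \
               registry.k8s.io/coredns/coredns:v1.13.1; do
        grep -qxF "$img" <<<"$have" || { echo "missing: $img" >&2; missing=1; }
    done
    if [ "$missing" -eq 0 ]; then
        echo "Images already preloaded, skipping extraction"
    fi
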
	I1210 06:05:48.451984    4268 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 docker true true} ...
	I1210 06:05:48.451984    4268 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-871500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 06:05:48.455620    4268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 06:05:48.856277    4268 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 06:05:48.856277    4268 cni.go:84] Creating CNI manager for ""
	I1210 06:05:48.856277    4268 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 06:05:48.856353    4268 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 06:05:48.856353    4268 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-871500 NodeName:functional-871500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 06:05:48.856531    4268 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-871500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 06:05:48.860333    4268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 06:05:48.875980    4268 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 06:05:48.881099    4268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 06:05:48.893740    4268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1210 06:05:48.914721    4268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 06:05:48.934821    4268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I1210 06:05:48.960316    4268 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 06:05:48.972694    4268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 06:05:49.123118    4268 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 06:05:49.255861    4268 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500 for IP: 192.168.49.2
	I1210 06:05:49.255861    4268 certs.go:195] generating shared ca certs ...
	I1210 06:05:49.255861    4268 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 06:05:49.256902    4268 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 06:05:49.257201    4268 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 06:05:49.257329    4268 certs.go:257] generating profile certs ...
	I1210 06:05:49.257955    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\client.key
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key.53a949a1
	I1210 06:05:49.257982    4268 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key
	I1210 06:05:49.259233    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 06:05:49.259785    4268 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 06:05:49.259886    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 06:05:49.260142    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 06:05:49.260323    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 06:05:49.260584    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 06:05:49.260858    4268 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 06:05:49.261989    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 06:05:49.291586    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 06:05:49.322755    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 06:05:49.365403    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 06:05:49.393221    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 06:05:49.422952    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 06:05:49.452108    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 06:05:49.481059    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-871500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 06:05:49.509597    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 06:05:49.540303    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 06:05:49.570456    4268 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 06:05:49.600563    4268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 06:05:49.625982    4268 ssh_runner.go:195] Run: openssl version
	I1210 06:05:49.646811    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.665986    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 06:05:49.688481    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.697316    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.701997    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 06:05:49.756268    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 06:05:49.774475    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.792936    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 06:05:49.812585    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.820754    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.824743    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 06:05:49.871530    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 06:05:49.889957    4268 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.909516    4268 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 06:05:49.930952    4268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.939674    4268 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.944280    4268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 06:05:49.991244    4268 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
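
Each CA installed above follows the same three steps: the PEM is copied under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and linked a second time under its OpenSSL subject-hash name (the <hash>.0 files such as b5213941.0), which is how OpenSSL's CApath lookup finds trust anchors. A sketch; install_ca is a hypothetical helper name, the steps mirror the log:

    # install_ca is a hypothetical helper; the steps mirror the log above.
    install_ca() {
        local pem="$1" name hash
        name="$(basename "$pem")"
        sudo ln -fs "$pem" "/etc/ssl/certs/$name"
        # OpenSSL resolves trust anchors in a CApath via <subject-hash>.0 links.
        hash="$(openssl x509 -hash -noout -in "$pem")"
        sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941.0 in this run
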
	I1210 06:05:50.007593    4268 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 06:05:50.020119    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 06:05:50.067344    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 06:05:50.116460    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 06:05:50.165520    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 06:05:50.215057    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 06:05:50.263721    4268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
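
The openssl -checkend 86400 probes above validate the existing control-plane PKI: the command exits non-zero when the certificate expires within the given number of seconds, so a failure means the certificate must be regenerated rather than reused. The six probes reduce to one loop over the same paths:

    # Verify no control-plane certificate expires within the next 24h (86400s).
    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
               etcd/healthcheck-client etcd/peer front-proxy-client; do
        openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/$crt.crt" \
            || echo "$crt.crt expires within 24h; needs regeneration" >&2
    done
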
	I1210 06:05:50.308021    4268 kubeadm.go:401] StartCluster: {Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:05:50.311614    4268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.346733    4268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 06:05:50.360552    4268 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 06:05:50.360580    4268 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 06:05:50.364548    4268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 06:05:50.378578    4268 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.383414    4268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
	I1210 06:05:50.435757    4268 kubeconfig.go:125] found "functional-871500" server: "https://127.0.0.1:50086"
	I1210 06:05:50.443021    4268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 06:05:50.458083    4268 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 05:49:09.404233938 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 06:05:48.941571180 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
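
The drift check is another diff gate: the freshly rendered kubeadm.yaml.new is compared with the kubeadm.yaml the cluster was last started from, and only a difference (here, the swapped enable-admission-plugins value) triggers the restart path that follows. As a sketch; the log interleaves the copy with stopping kubelet and the kube-system containers, which is simplified away here:

    # Reconfigure only when the rendered config differs from the active one.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected; restarting control plane" >&2
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
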
	I1210 06:05:50.458083    4268 kubeadm.go:1161] stopping kube-system containers ...
	I1210 06:05:50.462114    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 06:05:50.496795    4268 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 06:05:50.522144    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:05:50.536445    4268 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 10 05:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 05:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 05:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 10 05:53 /etc/kubernetes/scheduler.conf
	
	I1210 06:05:50.540786    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:05:50.560978    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:05:50.573948    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.578606    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:05:50.598347    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.624166    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.628272    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:05:50.646130    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:05:50.660886    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 06:05:50.664931    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:05:50.683408    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:05:50.706370    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:50.943551    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.490493    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.736715    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 06:05:51.807636    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
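
Rather than a full kubeadm init, the restart re-runs only the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, which leaves the etcd data directory in place. The five commands above reduce to a loop like this sketch:

    # Re-run only the needed kubeadm init phases against the updated config.
    for phase in 'certs all' 'kubeconfig all' kubelet-start \
                 'control-plane all' 'etcd local'; do
        sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/v1.35.0-rc.1:\$PATH \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml"
    done
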
	I1210 06:05:51.910188    4268 api_server.go:52] waiting for apiserver process to appear ...
	I1210 06:05:51.914776    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:05:52.416327    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe repeated every ~500 ms from 06:05:52 through 06:06:50; no kube-apiserver process appeared within the 60 s wait ...]
	I1210 06:06:51.416929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
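
The block above is minikube polling the node over SSH for a kube-apiserver process, re-issuing the same pgrep roughly every 500 ms until it matches or the wait times out. Below is a minimal sketch of that polling pattern, assuming a local exec stand-in for minikube's SSH runner; the helper names and the timeout value are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probe is a local stand-in for minikube's SSH runner: it runs pgrep and
// reports whether any process matched the pattern.
func probe(pattern string) bool {
	// pgrep exits 0 when at least one process matches, non-zero otherwise,
	// so the probe can simply be retried until it succeeds.
	return exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil
}

// waitForProcess re-issues the probe every 500ms until it matches or the
// deadline passes, mirroring the repeated log lines above.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if probe(pattern) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
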
	I1210 06:06:51.915349    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:06:51.946488    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.946488    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:06:51.950223    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:06:51.978835    4268 logs.go:282] 0 containers: []
	W1210 06:06:51.978835    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:06:51.982107    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:06:52.014720    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.014720    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:06:52.018659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:06:52.049849    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.049849    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:06:52.053813    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:06:52.081237    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.081237    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:06:52.085458    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:06:52.112058    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.112058    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:06:52.115659    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:06:52.145147    4268 logs.go:282] 0 containers: []
	W1210 06:06:52.145147    4268 logs.go:284] No container was found matching "kindnet"
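
With no apiserver process found, the checks above fall back to asking Docker whether the control-plane containers were ever created, filtering docker ps -a by the k8s_<component> name prefix and printing only the IDs. A rough local equivalent of that check follows, running the docker CLI directly rather than through minikube's SSH runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches the k8s_<component> prefix, as in the docker ps calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		// An empty list here corresponds to the log's
		// "0 containers: []" / "No container was found matching ..." pair.
		fmt.Printf("%q: %d containers: %v\n", c, len(ids), ids)
	}
}
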
	I1210 06:06:52.145147    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:06:52.145147    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:06:52.208920    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:06:52.208920    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:06:52.238472    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:06:52.238472    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:06:52.325434    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:06:52.315654   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.316655   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.317934   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.318711   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:06:52.321223   23201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[the same five connection-refused errors and closing line repeated verbatim from the stderr above]
	** /stderr **
	I1210 06:06:52.325434    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:06:52.325434    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:06:52.371108    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:06:52.371108    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
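
Every describe-nodes attempt in these cycles fails before it can do any work: kubectl cannot fetch the server's API group list because nothing is listening on the profile's apiserver port (8441 here). The same connection-refused symptom can be reproduced with a plain TCP dial; a minimal check, with the address hardcoded for illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver bound to the port, the dial fails immediately
	// with "connect: connection refused", matching the kubectl stderr above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
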
	[The same diagnostic cycle then repeats roughly every 3 seconds, at 06:06:54, 06:06:57, 06:07:00, 06:07:04, 06:07:07, 06:07:10, 06:07:13, 06:07:16, and 06:07:19, each time with identical results: the pgrep probe finds no kube-apiserver process; docker ps -a reports 0 containers matching kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet; kubelet, dmesg, Docker, and container-status logs are gathered; and the describe-nodes command exits with status 1 on the same connection-refused errors against localhost:8441.]
	I1210 06:07:22.172007    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:22.194631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:22.223852    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.223852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:22.227213    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:22.259065    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.259065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:22.262548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:22.294541    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.294541    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:22.297904    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:22.326231    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.326231    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:22.330450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:22.355798    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.355798    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:22.359259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:22.387519    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.387519    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:22.391049    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:22.418109    4268 logs.go:282] 0 containers: []
	W1210 06:07:22.418109    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:22.418109    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:22.418109    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:22.499328    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:22.489790   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.490896   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.491903   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.494536   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:22.495501   24702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:22.499328    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:22.499328    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:22.543726    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:22.543726    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:22.597115    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:22.597115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:22.659436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:22.659436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.192803    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:25.217242    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:25.244925    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.244925    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:25.251081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:25.278953    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.278953    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:25.282665    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:25.309347    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.309347    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:25.313377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:25.341665    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.341665    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:25.345141    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:25.371901    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.371901    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:25.375742    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:25.403341    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.403365    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:25.406946    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:25.437008    4268 logs.go:282] 0 containers: []
	W1210 06:07:25.437008    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:25.437008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:25.437008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:25.488060    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:25.488060    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:25.551490    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:25.551490    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:25.582172    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:25.582172    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:25.657523    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:25.647353   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.648357   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.649373   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.651014   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:25.652003   24886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:25.657523    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:25.657523    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:28.209929    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:28.232843    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:28.261372    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.261372    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:28.265040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:28.292477    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.292505    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:28.296009    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:28.320486    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.320486    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:28.324280    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:28.351296    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.351296    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:28.355074    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:28.390195    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.390195    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:28.394179    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:28.421613    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.421613    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:28.425545    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:28.453777    4268 logs.go:282] 0 containers: []
	W1210 06:07:28.453777    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:28.453777    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:28.453777    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:28.499488    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:28.499488    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:28.561776    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:28.561776    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:28.593067    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:28.593112    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:28.668150    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:28.657513   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.658364   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.661163   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.662304   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:28.663565   25034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:28.668150    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:28.668150    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.218151    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:31.240923    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:31.271844    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.271844    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:31.275477    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:31.301769    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.301769    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:31.305651    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:31.332406    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.332406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:31.336005    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:31.363591    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.363591    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:31.366859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:31.394594    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.394594    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:31.397901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:31.427778    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.427801    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:31.431499    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:31.458018    4268 logs.go:282] 0 containers: []
	W1210 06:07:31.458018    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:31.458052    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:31.458052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:31.504698    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:31.504698    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:31.560046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:31.560046    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:31.620436    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:31.620436    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:31.648931    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:31.648931    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:31.727951    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:31.718357   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.719615   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.720837   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.722218   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:31.723669   25189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
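
Every "describe nodes" attempt dies the same way: kubectl dials https://localhost:8441 and the TCP connect is refused, meaning no process holds that port inside the node. That condition can be verified directly; here is a minimal sketch, assuming it runs where the apiserver is expected to listen:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port that kubectl keeps failing to reach.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// This is the state the log reports: connect: connection refused.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
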
	I1210 06:07:34.232606    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:34.257055    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:34.288020    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.288020    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:34.291618    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:34.322496    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.322496    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:34.326328    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:34.354501    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.354501    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:34.358073    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:34.385199    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.385199    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:34.389140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:34.414316    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.414316    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:34.418016    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:34.445073    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.445073    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:34.448529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:34.479046    4268 logs.go:282] 0 containers: []
	W1210 06:07:34.479046    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:34.479046    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:34.479113    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:34.540365    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:34.540365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:34.571107    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:34.571107    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:34.651369    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:34.639849   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.640797   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.643867   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.644948   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:34.645803   25320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:34.651369    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:34.651369    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:34.695236    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:34.695236    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:37.251178    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:37.274825    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:37.305218    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.305218    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:37.308994    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:37.338625    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.338625    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:37.342529    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:37.370849    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.370849    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:37.374620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:37.403744    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.403744    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:37.407240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:37.435170    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.435170    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:37.439347    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:37.464351    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.464351    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:37.468757    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:37.497371    4268 logs.go:282] 0 containers: []
	W1210 06:07:37.497371    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:37.497371    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:37.497371    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:37.559564    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:37.559564    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:37.588662    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:37.588662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:37.667884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:37.657246   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.658358   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.659261   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.661714   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:37.662832   25475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:37.667913    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:37.667913    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:37.713250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:37.713250    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.270184    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:40.293820    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:40.321872    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.321872    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:40.325799    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:40.355617    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.355617    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:40.361421    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:40.389168    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.389168    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:40.393374    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:40.425493    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.425493    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:40.429344    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:40.458342    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.458342    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:40.462356    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:40.488885    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.488885    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:40.492942    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:40.521222    4268 logs.go:282] 0 containers: []
	W1210 06:07:40.521222    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:40.521222    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:40.521222    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:40.571132    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:40.571132    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:40.622991    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:40.622991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:40.680418    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:40.680418    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:40.710767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:40.710767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:40.786884    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:40.777278   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.778087   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.780838   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.781817   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:40.782760   25637 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:43.292302    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:43.316416    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:43.341307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.341307    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:43.345027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:43.370307    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.370307    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:43.374217    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:43.402135    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.402135    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:43.405647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:43.433991    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.434045    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:43.437705    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:43.465221    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.465221    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:43.468945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:43.494153    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.494153    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:43.497409    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:43.526559    4268 logs.go:282] 0 containers: []
	W1210 06:07:43.526559    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:43.526559    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:43.526559    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:43.592034    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:43.592034    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:43.621625    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:43.621625    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:43.699225    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:43.688896   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.689744   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.691973   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.692804   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:43.695050   25772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:43.699225    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:43.699225    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:43.742683    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:43.742683    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
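
Note the shell fallback in the "container status" gather just above: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first tries crictl and, if that command is absent or fails, falls back to plain docker. A rough Go equivalent of the selection logic, as a sketch only (minikube actually runs the bash one-liner over SSH, and the shell form also falls back when crictl exists but errors at runtime):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it resolves on PATH (cf. "which crictl || echo crictl");
	// simplified: unlike the shell form, this does not retry with docker if
	// crictl is present but fails at runtime.
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("%s ps -a failed: %v\n", tool, err)
	}
	fmt.Print(string(out))
}
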
	I1210 06:07:46.296260    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:46.320038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:46.350083    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.350127    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:46.354017    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:46.392667    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.392667    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:46.396040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:46.423477    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.423477    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:46.427089    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:46.457044    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.457044    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:46.461309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:46.492133    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.492133    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:46.496367    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:46.523683    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.523683    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:46.528125    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:46.556662    4268 logs.go:282] 0 containers: []
	W1210 06:07:46.556662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:46.556662    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:46.556662    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:46.622661    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:46.622661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:46.653087    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:46.653087    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:46.737036    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:46.725117   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.726037   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.729627   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.731599   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:46.733777   25926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 06:07:46.737036    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:46.737036    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:46.781873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:46.781873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.335832    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:49.359246    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:49.391481    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.391481    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:49.395372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:49.425639    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.425639    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:49.429616    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:49.457273    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.457273    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:49.460755    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:49.490445    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.490445    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:49.496643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:49.526292    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.526292    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:49.530371    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:49.557314    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.557359    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:49.561590    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:49.591753    4268 logs.go:282] 0 containers: []
	W1210 06:07:49.591753    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:49.591753    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:49.591753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:49.621767    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:49.621767    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:49.707223    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:49.697858   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.698899   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.699785   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.703604   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.704517   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:49.697858   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.698899   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.699785   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.703604   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:49.704517   26073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
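Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver at localhost:8441 because, as the container probes show, no kube-apiserver container exists to listen there, so the TCP connection is refused outright. A minimal manual confirmation from inside the node, assuming the same binary and kubeconfig paths as in the log and that ss and curl are available in the node image:

    # Nothing should be listening on the apiserver port while the container is absent.
    ss -ltn 'sport = :8441'
    # The raw HTTPS check fails the same way kubectl does.
    curl -ksS --max-time 5 https://localhost:8441/healthz || echo "apiserver unreachable"
    # kubectl reproduces the exact "connection refused" error from the log.
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz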
	I1210 06:07:49.707223    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:49.707223    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:49.751158    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:49.751158    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:49.799885    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:49.799885    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
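The remaining gathering steps shell out to journalctl for the docker/cri-docker and kubelet units, and fall back from crictl to the docker CLI for container status. The backtick construct in the log, `which crictl || echo crictl`, substitutes the resolved crictl path (or the bare name if it is not on PATH) before ps -a; if that invocation fails for any reason, plain docker ps -a is used instead. A slightly more readable equivalent of the same fallback:

    # Prefer crictl when available; fall back to the docker CLI if it is
    # missing or its listing fails (same behavior as the log's one-liner).
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a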
	I1210 06:07:52.366303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:52.390862    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:52.425737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.425770    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:52.429505    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:52.457550    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.457550    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:52.461709    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:52.488406    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.488406    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:52.492766    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:52.518703    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.518703    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:52.522666    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:52.550619    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.550619    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:52.554570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:52.583512    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.583512    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:52.587153    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:52.614737    4268 logs.go:282] 0 containers: []
	W1210 06:07:52.614737    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:52.614737    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:52.614811    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:52.677940    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:52.677940    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:52.709363    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:52.709363    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:52.791705    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:52.781560   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.782422   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.785208   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.786343   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.787080   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:52.781560   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.782422   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.785208   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.786343   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:52.787080   26226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:52.791705    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:52.791705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:52.835266    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:52.835266    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.404989    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:55.433031    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:55.462583    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.462583    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:55.466139    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:55.492223    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.492223    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:55.495759    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:55.523357    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.523357    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:55.530265    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:55.561457    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.561457    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:55.565257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:55.594178    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.594178    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:55.599162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:55.627914    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.627914    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:55.632194    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:55.659551    4268 logs.go:282] 0 containers: []
	W1210 06:07:55.659551    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:55.659551    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:55.659551    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:55.705228    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:55.705228    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:07:55.758018    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:55.758018    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:55.819730    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:55.819730    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:55.848800    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:55.848800    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:55.933602    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:55.919237   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.920249   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.924524   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.925340   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.926446   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:55.919237   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.920249   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.924524   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.925340   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:55.926446   26404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
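Between gathering passes, minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every three seconds, waiting for an apiserver process whose full command line matches that pattern (-f matches against the full command line, -x requires the pattern to match it exactly, -n selects the newest matching PID). A hedged sketch of an equivalent wait loop; the 3-second interval is read off the log timestamps, and the 5-minute deadline is illustrative rather than minikube's actual timeout:

    # Poll for a running kube-apiserver the way the log does, with a deadline.
    deadline=$((SECONDS + 300))   # illustrative 5-minute cap
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is running"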
	I1210 06:07:58.439191    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:07:58.463828    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:07:58.497407    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.497407    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:07:58.500686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:07:58.530436    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.530436    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:07:58.533685    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:07:58.561959    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.561959    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:07:58.566417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:07:58.596302    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.596302    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:07:58.600866    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:07:58.629840    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.629840    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:07:58.633617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:07:58.660127    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.660127    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:07:58.663612    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:07:58.692189    4268 logs.go:282] 0 containers: []
	W1210 06:07:58.692189    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:07:58.692189    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:07:58.692189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:07:58.754556    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:07:58.754556    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:07:58.784251    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:07:58.784251    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:07:58.866899    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:07:58.854125   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.855115   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.856391   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.857985   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.859051   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:07:58.854125   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.855115   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.856391   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.857985   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:07:58.859051   26539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:07:58.866899    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:07:58.866899    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:07:58.914793    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:07:58.914793    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:01.470823    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:01.494469    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:01.522381    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.522381    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:01.528647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:01.558012    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.558012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:01.564708    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:01.593835    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.593835    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:01.599056    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:01.623982    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.623982    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:01.627479    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:01.658260    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.658260    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:01.665836    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:01.697664    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.697664    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:01.702191    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:01.729816    4268 logs.go:282] 0 containers: []
	W1210 06:08:01.729816    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:01.729816    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:01.729816    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:01.788909    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:01.788909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:01.819503    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:01.819503    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:01.901569    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:01.889489   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.890512   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.891524   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.892377   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.894500   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:01.889489   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.890512   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.891524   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.892377   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:01.894500   26694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:01.901569    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:01.901569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:01.947339    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:01.947339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.502871    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:04.526200    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:04.558543    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.558543    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:04.563525    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:04.595332    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.595332    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:04.598770    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:04.630572    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.630572    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:04.635710    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:04.664369    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.664369    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:04.668951    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:04.699382    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.699382    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:04.702341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:04.732274    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.732274    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:04.735620    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:04.763772    4268 logs.go:282] 0 containers: []
	W1210 06:08:04.763772    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:04.763772    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:04.763866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:04.790890    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:04.790890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:04.872353    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:04.859391   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.860351   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.864058   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.865079   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.866076   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:04.859391   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.860351   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.864058   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.865079   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:04.866076   26841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:04.872353    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:04.872353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:04.916959    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:04.916959    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:04.965485    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:04.965560    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.533039    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:07.559067    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:07.588219    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.588219    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:07.591689    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:07.619350    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.619350    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:07.622996    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:07.652464    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.652464    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:07.657960    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:07.688918    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.688918    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:07.692848    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:07.722521    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.722521    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:07.726603    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:07.755963    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.755963    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:07.760630    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:07.790252    4268 logs.go:282] 0 containers: []
	W1210 06:08:07.790252    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:07.790252    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:07.790327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:07.852838    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:07.852838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:07.883838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:07.883838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:07.961862    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:07.950474   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.951452   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.952747   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.954027   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:07.955132   26995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:07.961862    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:07.961862    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:08.003991    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:08.003991    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:10.563653    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:10.586319    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:10.613645    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.613645    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:10.617237    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:10.646795    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.646795    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:10.652694    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:10.683833    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.683833    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:10.688294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:10.718409    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.718409    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:10.722444    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:10.746660    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.746660    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:10.751527    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:10.781904    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.781904    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:10.787205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:10.814738    4268 logs.go:282] 0 containers: []
	W1210 06:08:10.814738    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:10.814738    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:10.814792    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:10.841682    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:10.841682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:10.922604    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:10.910990   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.911994   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.912519   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.915063   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:10.916345   27141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:10.922639    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:10.922661    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:10.968300    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:10.968300    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:11.016711    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:11.016711    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:13.584862    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:13.607945    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:13.639757    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.639757    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:13.643362    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:13.673001    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.673001    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:13.676417    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:13.706241    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.706241    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:13.710040    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:13.735617    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.735840    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:13.738750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:13.768821    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.768821    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:13.772175    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:13.801535    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.801535    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:13.805351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:13.832881    4268 logs.go:282] 0 containers: []
	W1210 06:08:13.832881    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:13.832881    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:13.832881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:13.860208    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:13.860208    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:13.946278    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:13.935217   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.936421   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.937560   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.939101   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:13.940407   27289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:13.946278    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:13.946278    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:13.991759    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:13.991759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:14.045144    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:14.045144    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:16.612310    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:16.638180    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:16.667851    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.667851    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:16.671631    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:16.700699    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.700699    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:16.706277    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:16.734906    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.734906    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:16.738957    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:16.766394    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.766394    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:16.772893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:16.802581    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.802581    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:16.808905    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:16.836566    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.836566    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:16.840142    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:16.868091    4268 logs.go:282] 0 containers: []
	W1210 06:08:16.868091    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:16.868091    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:16.868091    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:16.897687    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:16.897687    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:16.975509    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:16.963204   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.964299   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.965894   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.966720   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:16.968954   27437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:16.975509    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:16.975509    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:17.020453    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:17.020453    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:17.069748    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:17.069748    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:19.636799    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:19.659733    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:19.690968    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.690968    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:19.694619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:19.722863    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.722863    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:19.726187    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:19.752031    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.752031    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:19.755396    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:19.783376    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.783376    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:19.786987    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:19.814219    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.814219    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:19.817751    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:19.847004    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.847004    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:19.850402    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:19.881752    4268 logs.go:282] 0 containers: []
	W1210 06:08:19.881752    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:19.881752    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:19.881752    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:19.930019    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:19.930019    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:19.983089    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:19.983089    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:20.045802    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:20.045802    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:20.077460    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:20.077460    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:20.162436    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:20.151708   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.152740   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.154010   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.155291   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:20.156364   27608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:22.668475    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:22.691439    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:22.721661    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.721661    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:22.725309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:22.754031    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.754031    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:22.758027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:22.785864    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.785864    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:22.789619    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:22.817384    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.817384    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:22.820727    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:22.851186    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.851186    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:22.855014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:22.883476    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.883476    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:22.887734    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:22.914588    4268 logs.go:282] 0 containers: []
	W1210 06:08:22.914588    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:22.914588    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:22.914588    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:22.977189    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:22.977189    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:23.007230    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:23.007230    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:23.085937    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:23.073621   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.076302   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.077595   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.078777   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:23.080139   27738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:23.085937    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:23.085937    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:23.128830    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:23.128830    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:25.690109    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:25.713674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:25.742134    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.742164    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:25.745613    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:25.771702    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.771789    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:25.775334    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:25.803239    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.803239    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:25.806686    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:25.836716    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.836716    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:25.840387    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:25.867927    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.867927    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:25.871435    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:25.898205    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.898205    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:25.901920    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:25.931569    4268 logs.go:282] 0 containers: []
	W1210 06:08:25.931569    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:25.931569    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:25.931569    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:25.995604    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:25.995604    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:26.025733    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:26.025733    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:26.107058    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:26.094116   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.098292   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.099172   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.100188   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.101258   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:26.094116   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.098292   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.099172   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.100188   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:26.101258   27890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:26.107115    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:26.107115    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:26.150320    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:26.150320    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:28.710236    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:28.735443    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:28.764680    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.764680    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:28.768537    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:28.795455    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.795455    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:28.799570    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:28.826729    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.826729    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:28.830406    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:28.859191    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.859191    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:28.862919    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:28.888542    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.888542    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:28.892494    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:28.919951    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.919951    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:28.923351    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:28.952838    4268 logs.go:282] 0 containers: []
	W1210 06:08:28.952838    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:28.952838    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:28.952909    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:29.034485    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:29.023348   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.024187   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.026875   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.028120   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.029114   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:29.023348   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.024187   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.026875   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.028120   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:29.029114   28030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:29.034485    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:29.034485    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:29.079092    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:29.079092    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:29.133555    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:29.133555    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:29.195221    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:29.195221    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:31.733591    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:31.757690    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:31.790674    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.790674    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:31.794674    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:31.825657    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.825721    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:31.829403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:31.858023    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.858023    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:31.861500    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:31.890867    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.890914    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:31.894490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:31.922953    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.922953    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:31.927186    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:31.954090    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.954090    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:31.957750    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:31.984886    4268 logs.go:282] 0 containers: []
	W1210 06:08:31.984920    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:31.984920    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:31.984951    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:32.048671    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:32.048671    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:32.079259    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:32.079259    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:32.157323    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:32.146579   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.147719   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.148633   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.150758   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:32.151551   28182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:32.157323    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:32.157323    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:32.203321    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:32.203321    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:34.760108    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:34.782876    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:34.810927    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.810927    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:34.814663    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:34.839714    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.839714    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:34.843722    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:34.870089    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.870089    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:34.873513    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:34.905367    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.905367    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:34.909301    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:34.938914    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.938914    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:34.942767    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:34.972329    4268 logs.go:282] 0 containers: []
	W1210 06:08:34.972329    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:34.976046    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:35.000780    4268 logs.go:282] 0 containers: []
	W1210 06:08:35.000780    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:35.000780    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:35.000838    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:35.065353    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:35.065353    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:35.095634    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:35.095634    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:35.171365    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:35.160656   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.162343   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.163491   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.165073   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.166057   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:35.160656   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.162343   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.163491   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.165073   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:35.166057   28331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:35.171365    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:35.171365    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:35.215605    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:35.215605    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:37.774322    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:37.798677    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:37.827936    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.827990    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:37.831228    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:37.860987    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.861065    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:37.864478    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:37.891877    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.891877    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:37.895716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:37.920808    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.920808    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:37.924309    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:37.952553    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.952553    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:37.956204    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:37.985826    4268 logs.go:282] 0 containers: []
	W1210 06:08:37.985826    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:37.989201    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:38.017309    4268 logs.go:282] 0 containers: []
	W1210 06:08:38.017309    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:38.017309    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:38.017309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:38.082876    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:38.083876    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:38.113796    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:38.113821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:38.196088    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:38.184048   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.187012   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.188966   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.190400   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.191695   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:38.184048   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.187012   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.188966   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.190400   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:38.191695   28478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:38.196123    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:38.196149    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:38.241227    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:38.241227    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:40.798944    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:40.821450    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:40.850414    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.850414    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:40.853927    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:40.881239    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.881239    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:40.885281    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:40.912960    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.912960    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:40.918840    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:40.950469    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.950469    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:40.954401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:40.982375    4268 logs.go:282] 0 containers: []
	W1210 06:08:40.982375    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:40.986123    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:41.016542    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.016542    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:41.019622    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:41.049577    4268 logs.go:282] 0 containers: []
	W1210 06:08:41.049662    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:41.049662    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:41.049694    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:41.076753    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:41.076753    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:41.160411    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:41.148000   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.148852   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.151925   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.154289   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.155876   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:41.148000   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.148852   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.151925   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.154289   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:41.155876   28627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:41.160445    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:41.160473    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:41.206612    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:41.206612    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:41.253715    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:41.253715    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:43.821604    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:43.845650    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:43.874167    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.874207    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:43.877812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:43.905508    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.905508    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:43.909372    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:43.939372    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.939426    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:43.942841    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:43.972078    4268 logs.go:282] 0 containers: []
	W1210 06:08:43.972078    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:43.975697    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:44.002329    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.002329    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:44.005898    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:44.035821    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.035821    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:44.039602    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:44.066798    4268 logs.go:282] 0 containers: []
	W1210 06:08:44.066839    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:44.066839    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:44.066839    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:44.128660    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:44.128660    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:44.159235    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:44.159235    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:44.242361    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:44.231367   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.232316   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.235308   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.236181   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.238800   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:44.231367   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.232316   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.235308   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.236181   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:44.238800   28779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:44.242361    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:44.242361    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:44.289326    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:44.289326    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:46.852233    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:46.874656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:46.903255    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.903255    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:46.907117    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:46.935108    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.935108    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:46.938584    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:46.967525    4268 logs.go:282] 0 containers: []
	W1210 06:08:46.967525    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:46.973772    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:47.001558    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.001558    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:47.005083    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:47.034015    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.034015    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:47.039271    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:47.068459    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.068459    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:47.071981    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:47.102013    4268 logs.go:282] 0 containers: []
	W1210 06:08:47.102013    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:47.102044    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:47.102065    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:47.164592    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:47.164592    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:47.195491    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:47.195491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:47.278044    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:47.265991   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.268610   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.269567   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.271904   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.272596   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:47.265991   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.268610   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.269567   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.271904   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:47.272596   28930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:47.278044    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:47.278044    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:47.324863    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:47.324863    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:49.880727    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:49.903789    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:49.935342    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.935342    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:49.938737    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:49.965312    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.965312    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:49.968607    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:49.996188    4268 logs.go:282] 0 containers: []
	W1210 06:08:49.996188    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:50.001257    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:50.027750    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.027750    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:50.031128    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:50.062729    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.062803    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:50.067118    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:50.095830    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.095830    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:50.099864    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:50.130283    4268 logs.go:282] 0 containers: []
	W1210 06:08:50.130283    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:50.130283    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:50.130283    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:50.193360    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:50.193360    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:50.221703    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:50.221703    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:50.303176    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:50.293680   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.294854   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.296200   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.298483   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.299446   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:50.293680   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.294854   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.296200   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.298483   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:50.299446   29083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:50.303176    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:50.303176    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:50.370163    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:50.370163    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:52.928303    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:52.953491    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:52.981271    4268 logs.go:282] 0 containers: []
	W1210 06:08:52.981271    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:52.985316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:53.013881    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.013881    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:53.017036    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:53.045261    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.045261    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:53.049312    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:53.077577    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.077577    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:53.080557    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:53.110750    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.110750    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:53.114132    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:53.141372    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.141372    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:53.145576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:53.175705    4268 logs.go:282] 0 containers: []
	W1210 06:08:53.175705    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:53.175705    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:53.175705    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:53.237519    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:53.237519    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:53.267260    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:53.267260    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:53.363780    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:53.355380   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.356544   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.357888   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.359124   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.360377   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:53.355380   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.356544   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.357888   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.359124   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:53.360377   29242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:53.363780    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:53.363780    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:53.409834    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:53.409834    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:55.976440    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:56.001300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:56.033852    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.033852    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:56.037643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:56.065934    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.065934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:56.072377    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:56.102560    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.102560    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:56.106392    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:56.143025    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.143025    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:56.149239    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:56.176909    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.176909    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:56.180641    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:56.208166    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.208227    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:56.211221    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:56.240358    4268 logs.go:282] 0 containers: []
	W1210 06:08:56.240358    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:56.240358    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:56.240358    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:56.303618    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:56.303618    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:56.333844    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:56.333844    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:56.416014    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:56.406081   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.406955   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.408179   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.409154   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:56.410395   29397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:08:56.416014    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:56.416014    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:56.461496    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:56.461496    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.013428    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:08:59.038379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:08:59.067727    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.067758    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:08:59.071379    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:08:59.104272    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.104272    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:08:59.107653    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:08:59.133866    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.133866    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:08:59.137442    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:08:59.164317    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.164317    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:08:59.168171    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:08:59.198264    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.198291    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:08:59.202014    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:08:59.229252    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.229252    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:08:59.233058    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:08:59.262804    4268 logs.go:282] 0 containers: []
	W1210 06:08:59.262837    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:08:59.262837    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:08:59.262866    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:08:59.309986    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:08:59.309986    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:08:59.362017    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:08:59.362052    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:08:59.422749    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:08:59.422749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:08:59.453982    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:08:59.453982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:08:59.534843    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:08:59.524756   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.525914   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.526844   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.529305   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.530549   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:08:59.524756   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.525914   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.526844   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.529305   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:08:59.530549   29557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
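	Note the cadence of the "sudo pgrep -xnf kube-apiserver.*minikube.*" probes: 06:08:49.88, 06:08:52.93, 06:08:55.98, 06:08:59.01, 06:09:02.04, roughly every three seconds. This is minikube's apiserver wait loop re-probing and re-gathering logs on each miss. A hand-rolled sketch of just the probe loop (the real loop lives in minikube's Go code, not shell):

	    # pgrep: -x exact match, -n newest process, -f match the full command line
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3
	    done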
	I1210 06:09:02.039970    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:02.063736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:02.094049    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.094049    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:02.097680    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:02.124934    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.124934    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:02.130724    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:02.158566    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.158566    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:02.162548    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:02.188736    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.188736    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:02.192205    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:02.222271    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.222271    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:02.225729    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:02.256473    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.256473    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:02.260671    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:02.287011    4268 logs.go:282] 0 containers: []
	W1210 06:09:02.287011    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:02.287011    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:02.287011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:02.392011    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:02.382734   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.383733   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.385038   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.386241   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.387283   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:02.382734   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.383733   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.385038   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.386241   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:02.387283   29685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:02.392011    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:02.392011    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:02.440008    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:02.440008    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:02.494764    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:02.494764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:02.553322    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:02.553322    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:05.090291    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:05.112936    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:05.141630    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.141630    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:05.144882    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:05.180128    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.180128    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:05.184542    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:05.213219    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.213219    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:05.216935    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:05.244351    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.244351    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:05.248038    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:05.277710    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.277760    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:05.281504    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:05.310297    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.310297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:05.314071    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:05.352094    4268 logs.go:282] 0 containers: []
	W1210 06:09:05.352094    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:05.352094    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:05.352094    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:05.398783    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:05.398896    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:05.458685    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:05.458685    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:05.489319    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:05.489319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:05.565657    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:05.556044   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.557996   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.559537   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.561579   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:05.562708   29854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:05.565657    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:05.565657    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.115745    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:08.138736    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:08.171066    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.171066    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:08.174894    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:08.201941    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.201941    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:08.205547    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:08.233859    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.233859    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:08.237566    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:08.264996    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.264996    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:08.269259    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:08.294641    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.294641    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:08.298901    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:08.350200    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.350200    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:08.356240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:08.383315    4268 logs.go:282] 0 containers: []
	W1210 06:09:08.383315    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:08.383354    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:08.383372    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:08.448982    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:08.448982    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:08.479093    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:08.479093    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:08.560338    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:08.549727   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.550675   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.553111   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.554353   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:08.555159   29990 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:08.560338    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:08.560338    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:08.606173    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:08.606173    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:11.159744    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:11.183765    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:11.210674    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.210698    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:11.214341    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:11.240117    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.240117    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:11.243522    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:11.272551    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.272551    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:11.276401    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:11.305619    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.305619    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:11.309310    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:11.360405    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.360447    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:11.363925    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:11.393251    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.393251    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:11.397006    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:11.426962    4268 logs.go:282] 0 containers: []
	W1210 06:09:11.426962    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:11.426962    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:11.426962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:11.477327    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:11.477327    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:11.532161    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:11.532161    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:11.592212    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:11.592212    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:11.622686    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:11.622686    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:11.705726    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:11.693925   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.694871   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.698826   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.701149   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:11.702201   30162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:14.210675    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:14.234399    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:14.264863    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.264863    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:14.268775    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:14.300413    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.300413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:14.304487    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:14.346847    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.346847    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:14.350643    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:14.380435    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.380435    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:14.384376    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:14.412797    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.412797    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:14.416519    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:14.447397    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.447397    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:14.450969    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:14.478632    4268 logs.go:282] 0 containers: []
	W1210 06:09:14.478695    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:14.478695    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:14.478695    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:14.528915    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:14.528915    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:14.588962    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:14.588962    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:14.618677    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:14.618677    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:14.700289    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:14.688765   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.691863   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.695446   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.696305   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:14.697431   30308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:14.700289    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:14.700289    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.249092    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:17.272763    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:17.300862    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.300952    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:17.306099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:17.346725    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.346725    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:17.350199    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:17.377982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.377982    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:17.380998    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:17.409995    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.409995    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:17.414294    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:17.442988    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.442988    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:17.449120    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:17.475982    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.475982    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:17.479552    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:17.506308    4268 logs.go:282] 0 containers: []
	W1210 06:09:17.506308    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:17.506308    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:17.506308    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:17.553141    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:17.553141    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:17.607169    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:17.607169    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:17.668742    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:17.668742    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:17.697789    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:17.697789    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:17.779510    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:17.770911   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.772114   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.773487   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.774333   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:17.776764   30458 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:20.283521    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:20.307295    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:20.338053    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.338053    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:20.341656    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:20.372543    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.372543    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:20.376481    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:20.403212    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.403212    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:20.406617    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:20.433422    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.433422    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:20.437081    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:20.465523    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.465523    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:20.469716    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:20.497769    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.497769    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:20.501184    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:20.528203    4268 logs.go:282] 0 containers: []
	W1210 06:09:20.528203    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:20.528203    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:20.528203    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:20.604309    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:20.596677   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.597696   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.598827   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.599955   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:20.601237   30586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:20.604309    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:20.604309    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:20.649121    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:20.649121    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:20.700336    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:20.700336    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:20.761156    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:20.761156    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
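The pass above — a pgrep for kube-apiserver, one docker ps filter per control-plane component, then kubelet/dmesg/describe-nodes/Docker/container-status gathering — is minikube's wait loop re-collecting diagnostics while the apiserver stays down. A minimal sketch of the same per-component check, run by hand against the node with the docker ps filters taken verbatim from the log; the profile name functional-newest is hypothetical:
	# Sketch: rerun minikube's container checks inside the node (hypothetical profile name).
	out/minikube-windows-amd64.exe -p functional-newest ssh -- 'for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do echo "== $c =="; docker ps -a --filter=name=k8s_$c --format={{.ID}}; done'
An empty ID column for every component matches the "0 containers" lines in the cycle above.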
	I1210 06:09:23.296453    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:23.318440    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:23.351977    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.351977    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:23.355449    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:23.384390    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.384413    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:23.387748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:23.416613    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.416613    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:23.422740    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:23.447410    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.447410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:23.450859    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:23.481298    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.481298    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:23.484812    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:23.510855    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.510855    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:23.514267    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:23.543042    4268 logs.go:282] 0 containers: []
	W1210 06:09:23.543042    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:23.543042    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:23.543042    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:23.608264    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:23.608264    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:23.639456    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:23.639491    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:23.717275    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:23.706870   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.707871   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.711802   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.713025   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:23.715049   30738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:23.717275    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:23.717319    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:23.761563    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:23.761563    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:26.321131    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:26.344893    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:26.376780    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.376780    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:26.380359    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:26.408268    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.408268    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:26.411660    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:26.440862    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.440862    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:26.444048    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:26.473546    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.473546    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:26.476599    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:26.505151    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.505151    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:26.508748    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:26.538121    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.538121    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:26.542550    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:26.569122    4268 logs.go:282] 0 containers: []
	W1210 06:09:26.569122    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:26.569122    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:26.569122    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:26.629615    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:26.629615    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:26.660648    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:26.660648    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:26.741888    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:26.730118   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.731561   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735001   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.735931   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:26.737367   30881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:26.741888    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:26.741888    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:26.787954    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:26.787954    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:29.348252    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:29.372474    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:29.401265    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.401265    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:29.404730    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:29.435756    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.435805    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:29.439300    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:29.470279    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.470279    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:29.474091    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:29.502410    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.502410    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:29.505917    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:29.535595    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.535595    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:29.539532    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:29.568556    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.568556    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:29.572020    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:29.599739    4268 logs.go:282] 0 containers: []
	W1210 06:09:29.599739    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:29.599739    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:29.599739    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:29.661483    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:29.661483    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:29.691565    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:29.691565    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:29.774718    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:29.764825   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.765629   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.768157   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.769097   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:29.770255   31028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:29.774718    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:29.774718    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:29.816878    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:29.816878    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:32.374472    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:32.397027    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:32.429904    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.429904    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:32.433647    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:32.460698    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.460756    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:32.464368    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:32.491682    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.491682    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:32.495066    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:32.523531    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.523531    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:32.526773    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:32.557102    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.557102    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:32.563482    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:32.591959    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.591959    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:32.595725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:32.625486    4268 logs.go:282] 0 containers: []
	W1210 06:09:32.625486    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:32.625486    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:32.625486    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:32.688451    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:32.688451    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:32.719004    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:32.719004    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:32.800020    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:32.788607   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.789314   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.791558   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.792611   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:32.793305   31174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:32.800020    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:32.800020    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:32.849061    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:32.849061    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:35.404633    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:35.429425    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:35.458232    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.458277    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:35.462316    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:35.489097    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.489097    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:35.492725    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:35.522979    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.522979    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:35.526587    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:35.555948    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.555948    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:35.559915    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:35.589220    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.589220    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:35.592883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:35.619789    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.619850    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:35.622872    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:35.649510    4268 logs.go:282] 0 containers: []
	W1210 06:09:35.649534    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:35.649534    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:35.649534    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:35.714882    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:35.715881    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:35.745666    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:35.745666    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:35.825749    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:35.812454   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.813402   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.819556   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.820578   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.821180   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:35.812454   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.813402   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.819556   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.820578   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:35.821180   31320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:35.825749    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:35.825749    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:35.871102    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:35.871102    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
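The same failure also reproduces from the host by pointing kubectl at the profile's apiserver endpoint directly; this is an illustrative sketch, not part of the test run:
	kubectl --server=https://localhost:8441 --insecure-skip-tls-verify get nodes
	# While the apiserver is down this prints the message seen throughout the log:
	#   The connection to the server localhost:8441 was refused - did you specify the right host or port?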
	I1210 06:09:38.430887    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:38.453030    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:38.484706    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.484706    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:38.488140    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:38.517210    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.517210    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:38.521162    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:38.549348    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.549348    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:38.553103    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:38.580109    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.580109    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:38.583794    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:38.613855    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.613934    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:38.618771    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:38.647097    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.647097    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:38.650932    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:38.680610    4268 logs.go:282] 0 containers: []
	W1210 06:09:38.680610    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:38.680610    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:38.680682    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:38.758813    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:38.749300   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.750109   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753125   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753957   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.756268   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:38.749300   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.750109   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753125   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.753957   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:38.756268   31459 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:38.758813    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:38.758813    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:38.807873    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:38.807873    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:38.867039    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:38.867067    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:38.926759    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:38.926759    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:41.462739    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:41.490464    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:41.518622    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.518622    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:41.524470    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:41.551685    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.551685    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:41.556977    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:41.584962    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.584962    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:41.588808    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:41.620594    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.620594    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:41.624185    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:41.656800    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.656800    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:41.659821    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:41.692628    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.692628    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:41.696287    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:41.726090    4268 logs.go:282] 0 containers: []
	W1210 06:09:41.726090    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:41.726090    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:41.726090    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:41.803427    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:41.793678   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.794849   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.796092   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.797004   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.799523   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:41.793678   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.794849   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.796092   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.797004   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:41.799523   31605 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:41.803427    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:41.803427    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:41.849170    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:41.849170    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:41.903654    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:41.903654    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:41.962299    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:41.962299    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.500876    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:44.523403    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:44.554849    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.554849    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:44.558352    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:44.588012    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.588012    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:44.591883    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:44.617831    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.617831    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:44.621490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:44.648689    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.648689    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:44.652490    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:44.684042    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.684042    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:44.687539    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:44.716817    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.716856    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:44.720738    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:44.747250    4268 logs.go:282] 0 containers: []
	W1210 06:09:44.747250    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:44.747250    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:44.747318    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:44.798396    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:44.798396    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:44.858678    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:44.858678    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:44.888995    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:44.888995    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:44.964778    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:44.955796   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.956638   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.958906   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.960018   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.961253   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:44.955796   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.956638   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.958906   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.960018   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:44.961253   31775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:44.964778    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:44.964778    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
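The timestamps show the wait loop retrying roughly every three seconds. To watch for the kube-apiserver container to (re)appear at the same cadence, a sketch using only commands already shown in the log (profile name hypothetical as before):
	# Sketch: poll the same docker ps filter every 3 seconds.
	out/minikube-windows-amd64.exe -p functional-newest ssh -- "while true; do docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}; sleep 3; done"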
	I1210 06:09:47.517925    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:47.541890    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:47.573716    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.573716    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:47.577684    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:47.606333    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.606333    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:47.610098    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:47.635733    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.635733    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:47.639327    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:47.669406    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.669406    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:47.673219    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:47.700633    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.700633    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:47.705121    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:47.733323    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.733323    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:47.737104    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:47.763071    4268 logs.go:282] 0 containers: []
	W1210 06:09:47.763071    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:47.763071    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:47.763140    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:47.826821    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:47.826821    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:47.856590    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:47.856590    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:47.933339    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:47.922383   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.923323   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.927777   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.928818   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.930519   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:47.922383   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.923323   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.927777   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.928818   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:47.930519   31912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:47.933339    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:47.933339    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:47.979012    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:47.979012    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.532699    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:50.557240    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 06:09:50.585813    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.585813    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:09:50.589369    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 06:09:50.622124    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.622124    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:09:50.625576    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 06:09:50.650920    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.650920    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:09:50.653943    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 06:09:50.682545    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.682545    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:09:50.686340    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 06:09:50.715893    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.715893    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:09:50.719099    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 06:09:50.748297    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.748297    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:09:50.751451    4268 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 06:09:50.779846    4268 logs.go:282] 0 containers: []
	W1210 06:09:50.779866    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:09:50.779890    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:09:50.779890    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:09:50.830198    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:09:50.830198    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:09:50.891330    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:09:50.891330    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:09:50.921331    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:09:50.921331    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:09:51.001029    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:09:50.991827   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.992701   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.996634   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.997913   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.999128   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:09:50.991827   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.992701   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.996634   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.997913   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:09:50.999128   32076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:09:51.001029    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:09:51.001029    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 06:09:53.554507    4268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:09:53.573659    4268 kubeadm.go:602] duration metric: took 4m3.2099315s to restartPrimaryControlPlane
	W1210 06:09:53.573659    4268 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 06:09:53.578070    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:09:54.057699    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:09:54.081355    4268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 06:09:54.095306    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:09:54.099578    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:09:54.113717    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:09:54.113717    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:09:54.118539    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:09:54.131350    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:09:54.135225    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:09:54.152710    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:09:54.164770    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:09:54.168898    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:09:54.185476    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.198490    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:09:54.202839    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:09:54.221180    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:09:54.234980    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:09:54.239197    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
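The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8441, and is otherwise removed before the kubeadm init that follows. A minimal sketch of that loop, assuming direct local file access instead of minikube's SSH runner:

// Sketch only (not minikube's code): sweep the four kubeconfigs and
// delete any that do not mention the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		// grep exits non-zero when the file is missing or has no match;
		// in either case the config is stale for this endpoint.
		if err := exec.Command("grep", "-q", endpoint, c).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, c)
			_ = os.Remove(c) // mirrors `sudo rm -f`; ignore already-missing files
		}
	}
}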
	I1210 06:09:54.256185    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:09:54.367900    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:09:54.450675    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:09:54.549884    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:13:55.304144    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:13:55.304213    4268 kubeadm.go:319] 
	I1210 06:13:55.304353    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:13:55.308106    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:13:55.308252    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:13:55.308389    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:13:55.308682    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:13:55.308682    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:13:55.309221    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:13:55.309347    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:13:55.309881    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:13:55.310005    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:13:55.310536    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:13:55.310642    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:13:55.310721    4268 kubeadm.go:319] OS: Linux
	I1210 06:13:55.310721    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:13:55.311254    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:13:55.311367    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:13:55.311538    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:13:55.311670    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:13:55.311750    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:13:55.311824    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:13:55.311865    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:13:55.312446    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:13:55.316886    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:13:55.316886    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:13:55.317855    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:13:55.317855    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:13:55.317855    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:13:55.321599    4268 out.go:252]   - Booting up control plane ...
	I1210 06:13:55.322123    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:13:55.322197    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:13:55.323161    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:13:55.323161    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000948554s
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:13:55.323161    4268 kubeadm.go:319] 
	I1210 06:13:55.323161    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:13:55.323161    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:13:55.324159    4268 kubeadm.go:319] 
	W1210 06:13:55.324159    4268 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000948554s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
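Per the kubelet-check messages above, kubeadm's wait-control-plane phase polls the kubelet health endpoint at http://127.0.0.1:10248/healthz for up to 4m0s; "context deadline exceeded" means it never answered in that window. A self-contained sketch of such a poll loop (the one-second poll interval is an assumption; kubeadm's actual implementation differs):

// Sketch of a kubelet health wait: poll /healthz until HTTP 200 or a
// 4-minute deadline expires, matching the failure mode logged above.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func waitForKubelet(ctx context.Context) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err != nil {
				continue // kubelet not up yet; keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForKubelet(ctx); err != nil {
		fmt.Println(err) // e.g. "kubelet not healthy: context deadline exceeded"
	}
}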
	
	I1210 06:13:55.329361    4268 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 06:13:55.788774    4268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:13:55.807235    4268 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 06:13:55.812328    4268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 06:13:55.824166    4268 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 06:13:55.824166    4268 kubeadm.go:158] found existing configuration files:
	
	I1210 06:13:55.829624    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 06:13:55.842900    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 06:13:55.846743    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 06:13:55.863007    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 06:13:55.876646    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 06:13:55.881322    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 06:13:55.900836    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.916668    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 06:13:55.921481    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 06:13:55.939813    4268 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 06:13:55.954759    4268 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 06:13:55.960058    4268 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 06:13:55.976998    4268 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 06:13:56.092783    4268 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 06:13:56.183907    4268 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 06:13:56.283504    4268 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 06:17:56.874768    4268 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 06:17:56.874768    4268 kubeadm.go:319] 
	I1210 06:17:56.875332    4268 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 06:17:56.883860    4268 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 06:17:56.883860    4268 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 06:17:56.883860    4268 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 06:17:56.884428    4268 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 06:17:56.884448    4268 kubeadm.go:319] CONFIG_INET: enabled
	I1210 06:17:56.884973    4268 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 06:17:56.885025    4268 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 06:17:56.885550    4268 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 06:17:56.885585    4268 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 06:17:56.886100    4268 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] OS: Linux
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 06:17:56.886147    4268 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 06:17:56.886670    4268 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 06:17:56.886723    4268 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 06:17:56.887297    4268 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 06:17:56.887297    4268 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 06:17:56.890313    4268 out.go:252]   - Generating certificates and keys ...
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 06:17:56.890313    4268 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 06:17:56.890917    4268 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 06:17:56.891009    4268 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 06:17:56.891147    4268 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 06:17:56.891709    4268 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 06:17:56.892230    4268 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 06:17:56.892299    4268 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 06:17:56.896667    4268 out.go:252]   - Booting up control plane ...
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 06:17:56.896667    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 06:17:56.897260    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 06:17:56.897780    4268 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 06:17:56.897839    4268 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 06:17:56.897839    4268 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00077699s
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.897839    4268 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is not running
	I1210 06:17:56.897839    4268 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 06:17:56.897839    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 06:17:56.898801    4268 kubeadm.go:319] 
	I1210 06:17:56.898801    4268 kubeadm.go:403] duration metric: took 12m6.5812244s to StartCluster
	I1210 06:17:56.898801    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 06:17:56.902808    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 06:17:57.138118    4268 cri.go:89] found id: ""
	I1210 06:17:57.138148    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.138172    4268 logs.go:284] No container was found matching "kube-apiserver"
	I1210 06:17:57.138172    4268 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 06:17:57.142698    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 06:17:57.185021    4268 cri.go:89] found id: ""
	I1210 06:17:57.185021    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.185021    4268 logs.go:284] No container was found matching "etcd"
	I1210 06:17:57.185092    4268 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 06:17:57.189241    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 06:17:57.228303    4268 cri.go:89] found id: ""
	I1210 06:17:57.228350    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.228350    4268 logs.go:284] No container was found matching "coredns"
	I1210 06:17:57.228350    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 06:17:57.233381    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 06:17:57.304677    4268 cri.go:89] found id: ""
	I1210 06:17:57.304677    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.304677    4268 logs.go:284] No container was found matching "kube-scheduler"
	I1210 06:17:57.304677    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 06:17:57.309206    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 06:17:57.355436    4268 cri.go:89] found id: ""
	I1210 06:17:57.355436    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.355436    4268 logs.go:284] No container was found matching "kube-proxy"
	I1210 06:17:57.355436    4268 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 06:17:57.359252    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 06:17:57.404878    4268 cri.go:89] found id: ""
	I1210 06:17:57.404878    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.404878    4268 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 06:17:57.404878    4268 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 06:17:57.409876    4268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 06:17:57.451416    4268 cri.go:89] found id: ""
	I1210 06:17:57.451416    4268 logs.go:282] 0 containers: []
	W1210 06:17:57.451499    4268 logs.go:284] No container was found matching "kindnet"
	I1210 06:17:57.451499    4268 logs.go:123] Gathering logs for container status ...
	I1210 06:17:57.451499    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 06:17:57.506664    4268 logs.go:123] Gathering logs for kubelet ...
	I1210 06:17:57.506764    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 06:17:57.578699    4268 logs.go:123] Gathering logs for dmesg ...
	I1210 06:17:57.578699    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 06:17:57.610293    4268 logs.go:123] Gathering logs for describe nodes ...
	I1210 06:17:57.610293    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 06:17:57.852641    4268 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 06:17:57.840732   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.841622   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.844268   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.845648   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:17:57.846764   40093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 06:17:57.852641    4268 logs.go:123] Gathering logs for Docker ...
	I1210 06:17:57.852641    4268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 06:17:57.899832    4268 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 06:17:57.899832    4268 out.go:285] * 
	W1210 06:17:57.899832    4268 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:17:57.900356    4268 out.go:285] * 
	W1210 06:17:57.902683    4268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 06:17:57.916933    4268 out.go:203] 
	W1210 06:17:57.920352    4268 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00077699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 06:17:57.920907    4268 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 06:17:57.921055    4268 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 06:17:57.924778    4268 out.go:203] 
	
	
	==> Docker <==
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939273296Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939278496Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939300298Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 06:05:46 functional-871500 dockerd[21148]: time="2025-12-10T06:05:46.939330401Z" level=info msg="Initializing buildkit"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.048285619Z" level=info msg="Completed buildkit initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057400499Z" level=info msg="Daemon has completed initialization"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057637121Z" level=info msg="API listen on [::]:2376"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057662524Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 dockerd[21148]: time="2025-12-10T06:05:47.057681026Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 10 06:05:47 functional-871500 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 06:05:47 functional-871500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Loaded network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 06:05:47 functional-871500 cri-dockerd[21480]: time="2025-12-10T06:05:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 06:05:47 functional-871500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 06:20:11.518170   42935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:11.519221   42935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:11.520145   42935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:11.524474   42935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 06:20:11.525145   42935 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000820] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000756] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000769] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000760] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001067] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 06:05] CPU: 0 PID: 66176 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000804] RIP: 0033:0x7faea69bcb20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7faea69bcaf6.
	[  +0.000646] RSP: 002b:00007ffe61c16590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000914] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000859] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000854] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000785] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000766] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000758] FS:  0000000000000000 GS:  0000000000000000
	[  +0.894437] CPU: 10 PID: 66302 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000900] RIP: 0033:0x7fd9e8de1b20
	[  +0.000422] Code: Unable to access opcode bytes at RIP 0x7fd9e8de1af6.
	[  +0.000734] RSP: 002b:00007ffc83151e80 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000839] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000834] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000828] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000825] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000826] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000826] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 06:20:11 up  1:48,  0 user,  load average: 0.41, 0.31, 0.42
	Linux functional-871500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 06:20:08 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:08 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 495.
	Dec 10 06:20:08 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:08 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:09 functional-871500 kubelet[42767]: E1210 06:20:09.004708   42767 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:09 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:09 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:09 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 10 06:20:09 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:09 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:09 functional-871500 kubelet[42781]: E1210 06:20:09.763526   42781 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:09 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:09 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:10 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 10 06:20:10 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:10 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:10 functional-871500 kubelet[42808]: E1210 06:20:10.536151   42808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:10 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:10 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 06:20:11 functional-871500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 10 06:20:11 functional-871500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:11 functional-871500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 06:20:11 functional-871500 kubelet[42910]: E1210 06:20:11.258682   42910 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 06:20:11 functional-871500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 06:20:11 functional-871500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500 -n functional-871500: exit status 2 (604.0872ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-871500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (54.36s)
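
The kubelet journal above shows why the restart loop never converges: the v1.35.0-rc.1 kubelet refuses to run on a cgroup v1 host unless its KubeletConfiguration sets 'FailCgroupV1' to 'false', exactly as the kubeadm warnings state. As a diagnostic sketch (not part of the test output; the profile name is taken from the log, and 'minikube ssh' just runs a command inside the node):

	# "cgroup2fs" means the node is on cgroup v2; "tmpfs" means cgroup v1
	minikube ssh -p functional-871500 -- stat -fc %T /sys/fs/cgroup
	# the same kubelet journal the kubeadm advice points at
	minikube ssh -p functional-871500 -- sudo journalctl -u kubelet --no-pager -n 20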

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-871500 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-871500 create deployment hello-node --image kicbase/echo-server: exit status 1 (98.4162ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:50086/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-871500 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (0.10s)
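
The EOF on the POST to https://127.0.0.1:50086 is the apiserver closing the connection, not a kubectl defect. A quick check to confirm that before suspecting the deployment itself (standard kubectl usage; context name taken from the log):

	kubectl --context functional-871500 get --raw /readyz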

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 service list: exit status 103 (493.9871ms)

-- stdout --
	* The control-plane node functional-871500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-871500"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-871500 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-871500 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-871500\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 service list -o json: exit status 103 (486.8463ms)

-- stdout --
	* The control-plane node functional-871500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-871500"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-871500 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 service --namespace=default --https --url hello-node: exit status 103 (498.3808ms)

-- stdout --
	* The control-plane node functional-871500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-871500"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-871500 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 service hello-node --url --format={{.IP}}: exit status 103 (496.0762ms)

-- stdout --
	* The control-plane node functional-871500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-871500"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-871500 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-871500 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-871500\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 service hello-node --url: exit status 103 (478.7304ms)

-- stdout --
	* The control-plane node functional-871500 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-871500"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-871500 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-871500 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-871500"
functional_test.go:1579: failed to parse "* The control-plane node functional-871500 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-871500\"": parse "* The control-plane node functional-871500 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-871500\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/powershell (2.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-871500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-871500"
functional_test.go:514: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-871500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-871500": exit status 1 (2.8252851s)

-- stdout --
	functional-871500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv/powershell (2.83s)
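
For reference, the round trip the test performs, written out as a sketch (the '--shell powershell' form is assumed here for explicitness; a successful eval presumes a healthy cluster, which this profile is not):

	out/minikube-windows-amd64.exe -p functional-871500 docker-env --shell powershell | Invoke-Expression
	docker ps    # after a successful eval, this targets the daemon inside the node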

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1210 06:21:01.748845    1444 out.go:360] Setting OutFile to fd 1588 ...
I1210 06:21:01.821450    1444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:01.821450    1444 out.go:374] Setting ErrFile to fd 1844...
I1210 06:21:01.821450    1444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:01.834410    1444 mustload.go:66] Loading cluster: functional-871500
I1210 06:21:01.834976    1444 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:01.842295    1444 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
I1210 06:21:01.896919    1444 host.go:66] Checking if "functional-871500" exists ...
I1210 06:21:01.900594    1444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:01.948996    1444 api_server.go:166] Checking apiserver status ...
I1210 06:21:01.953385    1444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 06:21:01.956621    1444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:02.010902    1444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
W1210 06:21:02.152844    1444 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 06:21:02.162557    1444 out.go:179] * The control-plane node functional-871500 apiserver is not running: (state=Stopped)
I1210 06:21:02.167939    1444 out.go:179]   To start a cluster, run: "minikube start -p functional-871500"

stdout: * The control-plane node functional-871500 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-871500"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 9320: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] stdout:
* The control-plane node functional-871500 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-871500"
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)
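
Exit code 103 comes from the apiserver gate the tunnel command runs before opening any routes. The same status probe the test helpers use elsewhere in this report exposes that gate directly (sketch; profile name from the log):

	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-871500    # prints "Stopped" here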

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-871500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-871500 apply -f testdata\testsvc.yaml: exit status 1 (20.1724461s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:50086/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-871500 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (20.18s)
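
The stderr's own suggestion of '--validate=false' would only skip the OpenAPI download; the apply still needs a reachable apiserver. As a sketch, the suggested form would be:

	kubectl --context functional-871500 apply -f testdata\testsvc.yaml --validate=false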

TestKubernetesUpgrade (833.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (49.349733s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-458400
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-458400: (5.9354704s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-458400 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-458400 status --format={{.Host}}: exit status 7 (212.4184ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
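Condensed, the sequence this test drives (commands verbatim from the dbg lines above and below):

	out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
	out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-458400
	out/minikube-windows-amd64.exe -p kubernetes-upgrade-458400 status --format={{.Host}}
	out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker   # this second start is what exits 109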
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker
E1210 07:08:02.323845   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker: exit status 109 (12m41.5249078s)

-- stdout --
	* [kubernetes-upgrade-458400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-458400" primary control-plane node in "kubernetes-upgrade-458400" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	
	

-- /stdout --
** stderr ** 
	I1210 07:07:49.524256    7364 out.go:360] Setting OutFile to fd 984 ...
	I1210 07:07:49.567650    7364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:07:49.567650    7364 out.go:374] Setting ErrFile to fd 1136...
	I1210 07:07:49.567650    7364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:07:49.581327    7364 out.go:368] Setting JSON to false
	I1210 07:07:49.584788    7364 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9401,"bootTime":1765341068,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:07:49.584788    7364 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:07:49.835784    7364 out.go:179] * [kubernetes-upgrade-458400] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:07:49.839379    7364 notify.go:221] Checking for updates...
	I1210 07:07:49.843155    7364 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:07:49.846424    7364 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:07:49.888429    7364 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:07:49.891253    7364 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:07:49.897372    7364 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:07:49.903253    7364 config.go:182] Loaded profile config "kubernetes-upgrade-458400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1210 07:07:49.903845    7364 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:07:50.023159    7364 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:07:50.026794    7364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:07:50.304858    7364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:111 SystemTime:2025-12-10 07:07:50.279077373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:07:50.308856    7364 out.go:179] * Using the docker driver based on existing profile
	I1210 07:07:50.314856    7364 start.go:309] selected driver: docker
	I1210 07:07:50.314856    7364 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-458400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-458400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:07:50.314856    7364 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:07:50.365039    7364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:07:50.596706    7364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:120 SystemTime:2025-12-10 07:07:50.580462873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:07:50.597708    7364 cni.go:84] Creating CNI manager for ""
	I1210 07:07:50.597708    7364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:07:50.597708    7364 start.go:353] cluster config:
	{Name:kubernetes-upgrade-458400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-458400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:07:50.602700    7364 out.go:179] * Starting "kubernetes-upgrade-458400" primary control-plane node in "kubernetes-upgrade-458400" cluster
	I1210 07:07:50.604700    7364 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:07:50.607700    7364 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:07:50.608701    7364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:07:50.608701    7364 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:07:50.608701    7364 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 07:07:50.609706    7364 cache.go:65] Caching tarball of preloaded images
	I1210 07:07:50.609706    7364 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 07:07:50.609706    7364 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 07:07:50.609706    7364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\config.json ...
	I1210 07:07:50.679706    7364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:07:50.679706    7364 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:07:50.679706    7364 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:07:50.679706    7364 start.go:360] acquireMachinesLock for kubernetes-upgrade-458400: {Name:mk0a7710e905fa9c2ff6723cf933b6d016f056b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:07:50.679706    7364 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubernetes-upgrade-458400"
	I1210 07:07:50.679706    7364 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:07:50.679706    7364 fix.go:54] fixHost starting: 
	I1210 07:07:50.686703    7364 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-458400 --format={{.State.Status}}
	I1210 07:07:50.737385    7364 fix.go:112] recreateIfNeeded on kubernetes-upgrade-458400: state=Stopped err=<nil>
	W1210 07:07:50.737430    7364 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:07:50.739697    7364 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-458400" ...
	I1210 07:07:50.745400    7364 cli_runner.go:164] Run: docker start kubernetes-upgrade-458400
	I1210 07:07:51.324842    7364 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-458400 --format={{.State.Status}}
	I1210 07:07:51.378021    7364 kic.go:430] container "kubernetes-upgrade-458400" state is running.
	I1210 07:07:51.383030    7364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-458400
	I1210 07:07:51.436022    7364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\config.json ...
	I1210 07:07:51.437014    7364 machine.go:94] provisionDockerMachine start ...
	I1210 07:07:51.441020    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:51.497028    7364 main.go:143] libmachine: Using SSH client type: native
	I1210 07:07:51.498014    7364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 55048 <nil> <nil>}
	I1210 07:07:51.498014    7364 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:07:51.500017    7364 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:07:54.673087    7364 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-458400
	
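Annotation: the "Error dialing TCP: ssh: handshake failed: EOF" at 07:07:51 followed by a successful "hostname" three seconds later is the provisioner retrying while sshd inside the freshly restarted container finishes starting. A minimal Go sketch of that dial-with-retry pattern (hypothetical helper, not minikube's actual code):

    package sshwait

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps attempting a TCP connection until the deadline
    // passes, sleeping briefly between attempts so a freshly restarted
    // container has time to bring sshd up.
    func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("dial %s: gave up after %s: %w", addr, timeout, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }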
	I1210 07:07:54.673087    7364 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-458400"
	I1210 07:07:54.677355    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:54.734058    7364 main.go:143] libmachine: Using SSH client type: native
	I1210 07:07:54.735228    7364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 55048 <nil> <nil>}
	I1210 07:07:54.735228    7364 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-458400 && echo "kubernetes-upgrade-458400" | sudo tee /etc/hostname
	I1210 07:07:54.937693    7364 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-458400
	
	I1210 07:07:54.941061    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:54.996509    7364 main.go:143] libmachine: Using SSH client type: native
	I1210 07:07:54.996509    7364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 55048 <nil> <nil>}
	I1210 07:07:54.996509    7364 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-458400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-458400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-458400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:07:55.164516    7364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:07:55.164516    7364 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:07:55.164516    7364 ubuntu.go:190] setting up certificates
	I1210 07:07:55.164516    7364 provision.go:84] configureAuth start
	I1210 07:07:55.168033    7364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-458400
	I1210 07:07:55.222522    7364 provision.go:143] copyHostCerts
	I1210 07:07:55.222632    7364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:07:55.222632    7364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:07:55.222632    7364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:07:55.224055    7364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:07:55.224086    7364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:07:55.224276    7364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:07:55.225285    7364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:07:55.225329    7364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:07:55.225374    7364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:07:55.225936    7364 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-458400 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-458400 localhost minikube]
	I1210 07:07:55.268241    7364 provision.go:177] copyRemoteCerts
	I1210 07:07:55.271477    7364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:07:55.274701    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:55.329174    7364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55048 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-458400\id_rsa Username:docker}
	I1210 07:07:55.459090    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:07:55.486776    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:07:55.521142    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 07:07:55.554077    7364 provision.go:87] duration metric: took 389.5549ms to configureAuth
	I1210 07:07:55.554137    7364 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:07:55.554188    7364 config.go:182] Loaded profile config "kubernetes-upgrade-458400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:07:55.558530    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:55.616954    7364 main.go:143] libmachine: Using SSH client type: native
	I1210 07:07:55.617518    7364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 55048 <nil> <nil>}
	I1210 07:07:55.617518    7364 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:07:55.793750    7364 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:07:55.793750    7364 ubuntu.go:71] root file system type: overlay
	I1210 07:07:55.793750    7364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:07:55.799485    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:55.857014    7364 main.go:143] libmachine: Using SSH client type: native
	I1210 07:07:55.858133    7364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 55048 <nil> <nil>}
	I1210 07:07:55.858254    7364 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:07:56.050049    7364 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:07:56.057371    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:56.114712    7364 main.go:143] libmachine: Using SSH client type: native
	I1210 07:07:56.115262    7364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 55048 <nil> <nil>}
	I1210 07:07:56.115262    7364 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:07:56.301519    7364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
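Annotation: the provisioner writes the rendered unit to docker.service.new, diffs it against the live file, and only swaps it in and restarts Docker when the two differ, which keeps re-provisioning idempotent. A sketch of the same render-and-compare idea in Go (deliberately tiny template and illustrative names, not minikube's actual code):

    package provision

    import (
        "bytes"
        "os"
        "text/template"
    )

    // unitTmpl is a stand-in for the full docker.service template
    // shown in the log above.
    var unitTmpl = template.Must(template.New("docker.service").Parse(
        "[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} --tlsverify\n"))

    // renderUnit executes the template and reports whether the on-disk
    // copy already matches, so the caller can skip the
    // daemon-reload/restart when nothing changed.
    func renderUnit(path string, data any) ([]byte, bool, error) {
        var buf bytes.Buffer
        if err := unitTmpl.Execute(&buf, data); err != nil {
            return nil, false, err
        }
        existing, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return nil, false, err
        }
        return buf.Bytes(), !bytes.Equal(existing, buf.Bytes()), nil
    }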
	I1210 07:07:56.301519    7364 machine.go:97] duration metric: took 4.8644298s to provisionDockerMachine
	I1210 07:07:56.301519    7364 start.go:293] postStartSetup for "kubernetes-upgrade-458400" (driver="docker")
	I1210 07:07:56.301519    7364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:07:56.306425    7364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:07:56.309649    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:56.361826    7364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55048 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-458400\id_rsa Username:docker}
	I1210 07:07:56.493110    7364 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:07:56.501813    7364 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:07:56.501813    7364 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:07:56.501813    7364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:07:56.501813    7364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:07:56.502458    7364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:07:56.506401    7364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:07:56.520465    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:07:56.552329    7364 start.go:296] duration metric: took 250.8064ms for postStartSetup
	I1210 07:07:56.556746    7364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:07:56.559587    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:56.613163    7364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55048 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-458400\id_rsa Username:docker}
	I1210 07:07:56.739866    7364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:07:56.748541    7364 fix.go:56] duration metric: took 6.0687429s for fixHost
	I1210 07:07:56.748541    7364 start.go:83] releasing machines lock for "kubernetes-upgrade-458400", held for 6.0687429s
	I1210 07:07:56.752165    7364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-458400
	I1210 07:07:56.808763    7364 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:07:56.813357    7364 ssh_runner.go:195] Run: cat /version.json
	I1210 07:07:56.813402    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:56.816447    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:07:56.866697    7364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55048 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-458400\id_rsa Username:docker}
	I1210 07:07:56.867695    7364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55048 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-458400\id_rsa Username:docker}
	W1210 07:07:56.983722    7364 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:07:56.988325    7364 ssh_runner.go:195] Run: systemctl --version
	I1210 07:07:57.003874    7364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:07:57.013163    7364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:07:57.017991    7364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:07:57.033649    7364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
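Annotation: the find/mv step above disables any stray bridge or podman CNI configs by renaming them with a .mk_disabled suffix so the container runtime ignores them. Roughly the same operation in Go (a sketch under the same naming convention, not minikube's actual code):

    package cni

    import (
        "os"
        "path/filepath"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs so the
    // runtime ignores them, like the find/mv command in the log above.
    func disableBridgeConfigs(dir string) error {
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }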
	I1210 07:07:57.033649    7364 start.go:496] detecting cgroup driver to use...
	I1210 07:07:57.033649    7364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:07:57.033649    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:07:57.061669    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:07:57.081099    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 07:07:57.092101    7364 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:07:57.092101    7364 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:07:58.133779    7364 ssh_runner.go:235] Completed: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml": (1.0526433s)
	I1210 07:07:58.133822    7364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:07:58.138494    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:07:58.188137    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:07:59.885772    7364 ssh_runner.go:235] Completed: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml": (1.6976097s)
	I1210 07:07:59.890223    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:07:59.912410    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:07:59.939101    7364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:07:59.955550    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:08:00.042732    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:08:00.062145    7364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
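Annotation: the sed edits above align containerd with the "cgroupfs" driver detected on the host, chiefly by forcing SystemdCgroup = false for the runc shim. The core rewrite expressed in Go (illustrative sketch of the same regex substitution):

    package containerdcfg

    import "regexp"

    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    // setSystemdCgroup rewrites the SystemdCgroup line in a containerd
    // config.toml, preserving indentation, so the shim's cgroup driver
    // matches the one detected on the host.
    func setSystemdCgroup(configTOML string, enabled bool) string {
        val := "false"
        if enabled {
            val = "true"
        }
        return systemdCgroupRe.ReplaceAllString(configTOML, "${1}SystemdCgroup = "+val)
    }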
	I1210 07:08:00.082938    7364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:08:00.100155    7364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:08:00.118764    7364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:08:00.253699    7364 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:08:00.411897    7364 start.go:496] detecting cgroup driver to use...
	I1210 07:08:00.411897    7364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:08:00.418498    7364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:08:00.442070    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:08:00.466201    7364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:08:00.775033    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:08:00.798728    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:08:00.816974    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:08:00.843057    7364 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:08:00.855890    7364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:08:01.012789    7364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:08:01.042056    7364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:08:01.195630    7364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:08:01.327247    7364 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:08:01.327247    7364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:08:01.353478    7364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:08:01.375616    7364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:08:01.512537    7364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:08:05.097107    7364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.5845145s)
	I1210 07:08:05.109138    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:08:05.138583    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:08:05.162868    7364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 07:08:05.189575    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:08:05.224418    7364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:08:05.419754    7364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:08:05.584932    7364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:08:05.752117    7364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:08:05.782009    7364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:08:05.805273    7364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:08:05.949237    7364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:08:06.092892    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:08:06.119398    7364 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:08:06.125905    7364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:08:06.134906    7364 start.go:564] Will wait 60s for crictl version
	I1210 07:08:06.140454    7364 ssh_runner.go:195] Run: which crictl
	I1210 07:08:06.161749    7364 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:08:06.226473    7364 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
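Annotation: "Will wait 60s for socket path /var/run/cri-dockerd.sock" is a simple existence poll before anything talks to the CRI endpoint. A minimal equivalent in Go (hypothetical helper name):

    package criwait

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI socket exists, mirroring the
    // "Will wait 60s for socket path" step in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("socket %s did not appear within %s", path, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }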
	I1210 07:08:06.231449    7364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:08:06.285015    7364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:08:06.345930    7364 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 07:08:06.350930    7364 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-458400 dig +short host.docker.internal
	I1210 07:08:06.494936    7364 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:08:06.500932    7364 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:08:06.507947    7364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:08:06.526966    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:08:06.589544    7364 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-458400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-458400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:08:06.589544    7364 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:08:06.593542    7364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:08:06.626809    7364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 07:08:06.626809    7364 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-rc.1 wasn't preloaded
	I1210 07:08:06.630790    7364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1210 07:08:06.646809    7364 ssh_runner.go:195] Run: which lz4
	I1210 07:08:06.658789    7364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 07:08:06.666213    7364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 07:08:06.666349    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284645196 bytes)
	I1210 07:08:09.594440    7364 docker.go:655] duration metric: took 2.9396031s to copy over tarball
	I1210 07:08:09.598437    7364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 07:08:11.866203    7364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.266731s)
	I1210 07:08:11.866203    7364 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 07:08:11.882208    7364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1210 07:08:11.894203    7364 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2652 bytes)
	I1210 07:08:11.916955    7364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:08:11.938794    7364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:08:12.074856    7364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:08:18.971730    7364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.8967414s)
	I1210 07:08:18.976279    7364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:08:19.019662    7364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
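Annotation: the preload decision is a set comparison. The first listing (v1.28.0 images) failed it, and this listing, taken after extracting the tarball and restarting Docker, satisfies it for everything except the etcd tag handled in the next lines. A sketch of that check (illustrative, not minikube's actual code):

    package preload

    // needsPreload reports which required image references are missing
    // from the runtime; a non-empty result is the condition behind the
    // "wasn't preloaded" log line.
    func needsPreload(wanted, loaded []string) (missing []string) {
        have := make(map[string]bool, len(loaded))
        for _, img := range loaded {
            have[img] = true
        }
        for _, img := range wanted {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing
    }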
	I1210 07:08:19.019715    7364 docker.go:697] registry.k8s.io/etcd:3.6.5-0 wasn't preloaded
	I1210 07:08:19.019745    7364 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:08:19.029704    7364 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:08:19.033712    7364 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:08:19.037700    7364 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:08:19.037700    7364 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:08:19.041698    7364 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:08:19.042702    7364 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:08:19.049705    7364 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:08:19.049705    7364 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:08:19.056704    7364 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:08:19.061699    7364 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:08:19.065703    7364 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:08:19.067701    7364 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:08:19.070709    7364 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:08:19.073702    7364 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:08:19.076702    7364 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:08:19.079702    7364 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1210 07:08:19.106698    7364 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.158720    7364 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.210289    7364 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.268542    7364 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.316173    7364 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.370172    7364 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.423176    7364 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:08:19.480164    7364 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:08:19.571674    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:08:19.573679    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:08:19.603681    7364 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:08:19.603681    7364 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:08:19.603681    7364 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:08:19.607678    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:08:19.607678    7364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:08:19.635813    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:08:19.640816    7364 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:08:19.640816    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:08:19.645828    7364 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:08:19.660812    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:08:19.675833    7364 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:08:19.675833    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:08:19.712820    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:08:19.794822    7364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:08:19.978817    7364 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:08:19.978817    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:08:21.927634    7364 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (1.9487877s)
	I1210 07:08:21.927634    7364 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:08:21.927634    7364 cache_images.go:125] Successfully loaded all cached images
	I1210 07:08:21.927634    7364 cache_images.go:94] duration metric: took 2.9078448s to LoadCachedImages
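Annotation: when a single image from the preload does not match the pinned tag, as with etcd:3.6.5-0 here, it is copied from the host cache over SSH and piped into "docker load". The load half of that fallback in Go (paths illustrative; a sketch, not minikube's actual code):

    package cacheimages

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadFromCache streams a cached image tarball into `docker load`,
    // the same fallback the log shows for the etcd_3.6.5-0 cache file.
    func loadFromCache(tarPath string) error {
        f, err := os.Open(tarPath)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }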
	I1210 07:08:21.927634    7364 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 docker true true} ...
	I1210 07:08:21.928163    7364 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-458400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-458400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:08:21.932139    7364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:08:22.012148    7364 cni.go:84] Creating CNI manager for ""
	I1210 07:08:22.012148    7364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:08:22.012148    7364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:08:22.012148    7364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-458400 NodeName:kubernetes-upgrade-458400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:08:22.013158    7364 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-458400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:08:22.017139    7364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:08:22.033146    7364 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:08:22.037141    7364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:08:22.050145    7364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1210 07:08:22.070141    7364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:08:22.090739    7364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1210 07:08:22.114738    7364 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:08:22.122747    7364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
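Annotation: the grep -v / echo / cp idiom above rewrites /etc/hosts so there is exactly one entry for the given name. An equivalent sketch in Go (hypothetical helper; a production version would need root and an atomic rename):

    package hosts

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing tab-separated line for name and
    // appends the desired mapping, mirroring the shell idiom in the log.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }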
	I1210 07:08:22.142735    7364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:08:22.286107    7364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:08:22.309104    7364 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400 for IP: 192.168.76.2
	I1210 07:08:22.309104    7364 certs.go:195] generating shared ca certs ...
	I1210 07:08:22.309104    7364 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:08:22.309104    7364 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:08:22.310109    7364 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:08:22.310109    7364 certs.go:257] generating profile certs ...
	I1210 07:08:22.310109    7364 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\client.key
	I1210 07:08:22.310109    7364 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\apiserver.key.7cfa3de3
	I1210 07:08:22.311111    7364 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\proxy-client.key
	I1210 07:08:22.312114    7364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:08:22.312114    7364 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:08:22.312114    7364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:08:22.312114    7364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:08:22.312114    7364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:08:22.313104    7364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:08:22.313104    7364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:08:22.314114    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:08:22.341123    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:08:22.369107    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:08:22.396950    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:08:22.431820    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 07:08:22.466489    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:08:22.505835    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:08:22.534828    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:08:22.565247    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:08:22.594161    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:08:22.620734    7364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:08:22.648762    7364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:08:22.672733    7364 ssh_runner.go:195] Run: openssl version
	I1210 07:08:22.686731    7364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:08:22.706990    7364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:08:22.727494    7364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:08:22.734649    7364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:08:22.738659    7364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:08:22.788666    7364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:08:22.814127    7364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:08:22.834943    7364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:08:22.853819    7364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:08:22.860813    7364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:08:22.865816    7364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:08:22.914217    7364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:08:22.933220    7364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:08:22.948208    7364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:08:22.965216    7364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:08:22.972217    7364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:08:22.976215    7364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:08:23.026220    7364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
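Annotation: each CA certificate copied into /usr/share/ca-certificates is then symlinked into /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (b5213941.0 for minikubeCA above), which is how CApath-style lookups find it. A sketch of creating such a link (shelling out to openssl; helper name illustrative):

    package certs

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks certPath into dir under the
    // "<subject hash>.0" name that OpenSSL's CApath lookup expects.
    func linkBySubjectHash(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }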
	I1210 07:08:23.043213    7364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:08:23.055212    7364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:08:23.116413    7364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:08:23.166449    7364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:08:23.221664    7364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:08:23.294827    7364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:08:23.348065    7364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
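Annotation: each "openssl x509 ... -checkend 86400" run above asks whether a certificate expires within the next 24 hours, presumably so the restart path can regenerate soon-to-expire certificates before kubeadm trips over them. The native Go equivalent of that check (a sketch using only the standard library):

    package certs

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the given window, the native form of
    // `openssl x509 -checkend 86400`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }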
	I1210 07:08:23.395916    7364 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-458400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-458400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:08:23.400096    7364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:08:23.436825    7364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:08:23.449825    7364 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:08:23.449825    7364 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:08:23.453826    7364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:08:23.465819    7364 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:08:23.468826    7364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-458400
	I1210 07:08:23.522835    7364 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-458400" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:08:23.523827    7364 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-458400" cluster setting kubeconfig missing "kubernetes-upgrade-458400" context setting]
	I1210 07:08:23.524842    7364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:08:23.541829    7364 kapi.go:59] client config for kubernetes-upgrade-458400: &rest.Config{Host:"https://127.0.0.1:55052", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-458400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-458400/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff61ff39080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
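
The client config dumped above is a client-go `rest.Config` pointing at the Docker-forwarded API port with the profile's client certificate. A minimal sketch of building the equivalent config by hand (host and file paths are copied from the log and differ per run):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400`
	cfg := &rest.Config{
		// Forwarded host port for the container's 8443/tcp, per the
		// `docker container inspect` call above.
		Host: "https://127.0.0.1:55052",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + `\client.crt`,
			KeyFile:  profile + `\client.key`,
			CAFile:   `C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt`,
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client built:", clientset != nil)
}
```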
	I1210 07:08:23.542847    7364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:08:23.542847    7364 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:08:23.542847    7364 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:08:23.542847    7364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:08:23.542847    7364 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:08:23.546837    7364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:08:23.560828    7364 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:07:25.109141959 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:08:22.102299734 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-458400"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
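
The drift above is the kubeadm v1beta3 → v1beta4 API migration: `extraArgs` changed from a flat string map to an ordered list of name/value pairs (which, unlike a map, preserves order and allows a flag to repeat), and the old `proxy-refresh-interval` etcd override is dropped. A minimal sketch of the two shapes using local stand-in types, not the real kubeadm API structs:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// v1beta3 shape: extraArgs is a flat map, so each flag appears at most once.
type clusterV1beta3 struct {
	APIVersion string            `yaml:"apiVersion"`
	ExtraArgs  map[string]string `yaml:"extraArgs"`
}

// v1beta4 shape: extraArgs is an ordered list of name/value pairs.
type arg struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}
type clusterV1beta4 struct {
	APIVersion string `yaml:"apiVersion"`
	ExtraArgs  []arg  `yaml:"extraArgs"`
}

func main() {
	old := clusterV1beta3{
		APIVersion: "kubeadm.k8s.io/v1beta3",
		ExtraArgs:  map[string]string{"leader-elect": "false"},
	}
	migrated := clusterV1beta4{
		APIVersion: "kubeadm.k8s.io/v1beta4",
		ExtraArgs:  []arg{{Name: "leader-elect", Value: "false"}},
	}
	a, _ := yaml.Marshal(old)
	b, _ := yaml.Marshal(migrated)
	fmt.Print(string(a), "---\n", string(b))
}
```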
	I1210 07:08:23.560828    7364 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:08:23.564825    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:08:23.596407    7364 docker.go:484] Stopping containers: [67d9fa01b7aa 5979722c7c1d b07ccc28ebf8 c742f3ab058c 065c80129bf3 1efaedddf7cc cb704121a5e7 076edefd2ec2]
	I1210 07:08:23.600413    7364 ssh_runner.go:195] Run: docker stop 67d9fa01b7aa 5979722c7c1d b07ccc28ebf8 c742f3ab058c 065c80129bf3 1efaedddf7cc cb704121a5e7 076edefd2ec2
	I1210 07:08:23.640692    7364 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:08:23.665128    7364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:08:23.679140    7364 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 10 07:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 10 07:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 07:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 10 07:07 /etc/kubernetes/scheduler.conf
	
	I1210 07:08:23.684128    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:08:23.702139    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:08:23.719130    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:08:23.731137    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:08:23.735131    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:08:23.750143    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:08:23.763149    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:08:23.766131    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:08:23.782132    7364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:08:23.799130    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:08:23.868960    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:08:24.396929    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:08:24.636763    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:08:24.709314    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
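
restartPrimaryControlPlane re-runs only the `kubeadm init` phases it needs (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, rather than a full `kubeadm init`. A minimal sketch of driving those phases in order (this runs them locally via exec as an approximation; minikube executes them over its SSH runner with the versioned binary dir on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.35.0-rc.1"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, p := range phases {
		cmd := exec.Command("sudo", append([]string{binDir + "/kubeadm"}, p...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			// A failed phase aborts the restart, as in the flow above.
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases reapplied")
}
```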
	I1210 07:08:24.775889    7364 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:08:24.780891    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:25.280371    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:25.781024    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:26.280490    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:26.784300    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:27.280796    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:27.780155    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:28.281272    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:28.780222    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:29.281327    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:29.782576    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:30.282112    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:30.784240    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:31.282476    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:31.781124    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:32.282116    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:32.783897    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:33.280952    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:33.782029    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:34.280533    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:34.779152    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:35.281849    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:35.783263    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:36.282683    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:36.781368    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:37.282929    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:37.781129    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:38.279538    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:38.781783    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:39.283151    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:39.779597    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:40.283281    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:40.782559    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:41.279461    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:41.806187    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:42.328726    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:42.801297    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:43.301086    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:43.787427    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:44.286948    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:44.785761    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:45.281198    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:45.781604    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:46.296893    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:46.781858    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:47.281190    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:47.782499    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:48.280120    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:48.780081    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:49.281648    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:49.780161    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:50.280757    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:50.780873    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:51.281959    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:51.781924    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:52.282149    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:52.782182    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:53.282290    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:53.781132    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:54.279504    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:54.783292    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:55.281608    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:55.780456    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:56.282455    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:56.781118    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:57.281981    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:57.781073    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:58.281464    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:58.783154    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:59.283242    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:08:59.783072    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:00.281184    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:00.781959    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:01.282780    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:01.782600    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:02.282652    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:02.781628    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:03.280970    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:03.780994    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:04.281946    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:04.782407    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:05.282396    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:05.781570    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:06.280357    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:06.783924    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:07.282207    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:07.781753    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:08.281716    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:08.780709    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:09.280775    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:09.781528    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:10.281452    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:10.781336    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:11.280854    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:11.781523    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:12.281810    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:12.780725    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:13.281990    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:13.782310    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:14.281787    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:14.781935    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:15.281761    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:15.781203    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:16.282535    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:16.782020    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:17.281476    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:17.781281    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:18.281281    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:18.780633    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:19.282465    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:19.780673    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:20.281495    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:20.781368    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:21.280852    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:21.782955    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:22.281246    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:22.782258    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:23.282056    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:23.781078    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:24.283908    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
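
The ~500ms cadence above is minikube polling for the kube-apiserver process; once the probes keep failing past the wait window it switches to gathering component logs for diagnosis, as the next lines show. A minimal sketch of that wait loop (the pgrep pattern and cadence are taken from the log; the timeout value and local exec runner are illustrative, since minikube runs this over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until the apiserver process appears or the
// deadline passes, mirroring the ~500ms probe cadence in the log above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(60 * time.Second); err != nil {
		fmt.Println(err) // minikube would now collect logs for diagnosis
	}
}
```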
	I1210 07:09:24.780609    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:24.812840    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:24.817745    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:24.848889    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:24.854343    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:24.892692    7364 logs.go:282] 0 containers: []
	W1210 07:09:24.892692    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:24.897031    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:24.927691    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:24.932395    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:24.970179    7364 logs.go:282] 0 containers: []
	W1210 07:09:24.971161    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:24.974159    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:25.007408    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:25.010397    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:25.040793    7364 logs.go:282] 0 containers: []
	W1210 07:09:25.040793    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:25.044990    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:25.082701    7364 logs.go:282] 0 containers: []
	W1210 07:09:25.082701    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:25.082701    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:25.082701    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:25.122690    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:25.122690    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:25.205795    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:25.205795    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:25.205795    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:25.255389    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:25.255389    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:25.293921    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:25.293921    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:25.324903    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:25.324903    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:25.405031    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:25.405031    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:25.470392    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:25.470500    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:25.515743    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:25.515743    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
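
Each diagnostic pass above enumerates the control-plane containers by their `k8s_<component>` name filter and tails 400 lines of each; the recurring `describe nodes` failure is expected here, since nothing is serving on localhost:8443 yet. A minimal sketch of that per-component collection (component names and tail length are taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		// Find the container ID the same way the log does: by k8s_<name> filter.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		logs, _ := exec.Command("docker", "logs", "--tail", "400", ids[0]).CombinedOutput()
		fmt.Printf("== %s [%s]: collected %d bytes of logs\n", name, ids[0], len(logs))
	}
}
```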
	I1210 07:09:28.071675    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:28.094923    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:28.128509    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:28.131945    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:28.162556    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:28.165556    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:28.194131    7364 logs.go:282] 0 containers: []
	W1210 07:09:28.194131    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:28.197131    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:28.225216    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:28.229400    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:28.258889    7364 logs.go:282] 0 containers: []
	W1210 07:09:28.258889    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:28.261899    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:28.294638    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:28.299746    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:28.327300    7364 logs.go:282] 0 containers: []
	W1210 07:09:28.327300    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:28.330849    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:28.361707    7364 logs.go:282] 0 containers: []
	W1210 07:09:28.361707    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:28.361707    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:28.361707    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:28.410366    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:28.410436    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:28.451862    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:28.451862    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:28.528913    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:28.528913    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:28.618777    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:28.618777    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:28.618777    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:28.674263    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:28.674263    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:28.713404    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:28.713404    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:28.766411    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:28.766411    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:28.814023    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:28.814023    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:31.363538    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:31.387131    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:31.422530    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:31.427542    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:31.466082    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:31.470090    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:31.502401    7364 logs.go:282] 0 containers: []
	W1210 07:09:31.502401    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:31.505405    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:31.534402    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:31.537400    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:31.571653    7364 logs.go:282] 0 containers: []
	W1210 07:09:31.571653    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:31.574978    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:31.606980    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:31.609974    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:31.637968    7364 logs.go:282] 0 containers: []
	W1210 07:09:31.637968    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:31.640968    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:31.673156    7364 logs.go:282] 0 containers: []
	W1210 07:09:31.673245    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:31.673311    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:31.673311    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:31.715662    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:31.715662    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:31.762453    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:31.762453    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:31.805492    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:31.805544    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:31.839594    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:31.839594    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:31.894712    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:31.894712    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:31.968749    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:31.968749    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:32.017362    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:32.017362    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:32.059142    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:32.059142    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:32.144604    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:34.651963    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:34.678688    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:34.715817    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:34.721824    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:34.754455    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:34.759473    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:34.791576    7364 logs.go:282] 0 containers: []
	W1210 07:09:34.791576    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:34.795430    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:34.826887    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:34.830207    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:34.855310    7364 logs.go:282] 0 containers: []
	W1210 07:09:34.855386    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:34.860178    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:34.891151    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:34.894652    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:34.929402    7364 logs.go:282] 0 containers: []
	W1210 07:09:34.929456    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:34.933305    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:34.962419    7364 logs.go:282] 0 containers: []
	W1210 07:09:34.962457    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:34.962524    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:34.962524    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:35.047714    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:35.047808    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:35.047854    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:35.100972    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:35.101035    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:35.143721    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:35.143721    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:35.195830    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:35.195830    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:35.268609    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:35.268609    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:35.309603    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:35.309603    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:35.362175    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:35.362175    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:35.406201    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:35.406201    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:37.945872    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:37.968894    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:38.005814    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:38.011129    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:38.042762    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:38.046328    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:38.078206    7364 logs.go:282] 0 containers: []
	W1210 07:09:38.078206    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:38.082117    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:38.117812    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:38.121445    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:38.151680    7364 logs.go:282] 0 containers: []
	W1210 07:09:38.151680    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:38.155861    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:38.188090    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:38.192083    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:38.224632    7364 logs.go:282] 0 containers: []
	W1210 07:09:38.224632    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:38.229723    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:38.261829    7364 logs.go:282] 0 containers: []
	W1210 07:09:38.261829    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:38.261829    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:38.261829    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:38.315812    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:38.315887    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:38.355855    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:38.355855    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:38.446642    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:38.446642    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:38.446642    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:38.494647    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:38.494647    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:38.541236    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:38.541236    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:38.574241    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:38.574241    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:38.637265    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:38.637265    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:38.684587    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:38.685590    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:41.231595    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:41.259116    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:41.293638    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:41.297847    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:41.329568    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:41.335612    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:41.364269    7364 logs.go:282] 0 containers: []
	W1210 07:09:41.364269    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:41.368480    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:41.399707    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:41.402909    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:41.431657    7364 logs.go:282] 0 containers: []
	W1210 07:09:41.431657    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:41.437419    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:41.467466    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:41.471555    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:41.502190    7364 logs.go:282] 0 containers: []
	W1210 07:09:41.502190    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:41.505927    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:41.536791    7364 logs.go:282] 0 containers: []
	W1210 07:09:41.536848    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:41.536879    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:41.536911    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:41.601120    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:41.602123    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:41.653977    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:41.653977    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:41.692389    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:41.692389    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:41.745649    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:41.745649    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:41.783309    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:41.783309    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:41.877947    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:41.877947    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:41.877947    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:41.931963    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:41.931983    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:41.989663    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:41.989663    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:44.558578    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:44.585189    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:44.618424    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:44.621399    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:44.652441    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:44.658670    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:44.693683    7364 logs.go:282] 0 containers: []
	W1210 07:09:44.693757    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:44.698207    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:44.730737    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:44.734663    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:44.770324    7364 logs.go:282] 0 containers: []
	W1210 07:09:44.770324    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:44.774325    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:44.804318    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:44.807321    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:44.835068    7364 logs.go:282] 0 containers: []
	W1210 07:09:44.835068    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:44.839377    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:44.873786    7364 logs.go:282] 0 containers: []
	W1210 07:09:44.873786    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:44.873786    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:44.873786    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:44.942588    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:44.943589    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:45.063487    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:45.063487    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:45.063487    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:45.110120    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:45.110120    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:45.153145    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:45.153145    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:45.214913    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:45.214913    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:45.257316    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:45.257316    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:45.309395    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:45.310395    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:45.359683    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:45.359683    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:47.901318    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:47.924463    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:47.955643    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:47.959924    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:47.995820    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:48.001087    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:48.054119    7364 logs.go:282] 0 containers: []
	W1210 07:09:48.054182    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:48.058069    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:48.087028    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:48.091156    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:48.120048    7364 logs.go:282] 0 containers: []
	W1210 07:09:48.120077    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:48.123876    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:48.157114    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:48.160825    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:48.190218    7364 logs.go:282] 0 containers: []
	W1210 07:09:48.190266    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:48.194887    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:48.223271    7364 logs.go:282] 0 containers: []
	W1210 07:09:48.223340    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:48.223340    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:48.223340    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:48.269071    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:48.269071    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:48.317969    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:48.317969    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:48.356935    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:48.356935    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:48.444817    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:48.444817    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:48.444817    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:48.486302    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:48.486302    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:48.524673    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:48.524673    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:48.559442    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:48.559442    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:48.613064    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:48.613064    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:51.186994    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:51.212854    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:51.248165    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:51.251630    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:51.281455    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:51.284753    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:51.318132    7364 logs.go:282] 0 containers: []
	W1210 07:09:51.318132    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:51.321873    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:51.351247    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:51.354788    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:51.390443    7364 logs.go:282] 0 containers: []
	W1210 07:09:51.390443    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:51.393631    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:51.425499    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:51.428492    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:51.459604    7364 logs.go:282] 0 containers: []
	W1210 07:09:51.459604    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:51.462681    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:51.494791    7364 logs.go:282] 0 containers: []
	W1210 07:09:51.494791    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:51.494791    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:51.494791    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:51.557078    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:51.557078    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:51.595897    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:51.595897    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:51.643104    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:51.643104    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:51.686826    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:51.686826    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:51.723223    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:51.723223    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:51.779306    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:51.779306    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:51.854772    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:51.854772    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:51.854772    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:51.900473    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:51.900532    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:54.441239    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:54.462621    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:54.501335    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:54.504936    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:54.536172    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:54.539312    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:54.568817    7364 logs.go:282] 0 containers: []
	W1210 07:09:54.568817    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:54.572868    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:54.607109    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:54.610343    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:54.639146    7364 logs.go:282] 0 containers: []
	W1210 07:09:54.639146    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:54.642998    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:54.674885    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:54.678080    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:54.712905    7364 logs.go:282] 0 containers: []
	W1210 07:09:54.712905    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:54.716884    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:54.750061    7364 logs.go:282] 0 containers: []
	W1210 07:09:54.750061    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:54.750061    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:54.750061    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:09:54.817133    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:54.817133    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:54.893776    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:54.893776    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:54.893776    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:54.946147    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:54.946147    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:54.992848    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:54.992848    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:55.027526    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:55.027586    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:55.059808    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:55.059808    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:55.098515    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:55.098515    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:55.150417    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:55.150417    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:57.705809    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:09:57.729071    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:09:57.765280    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:09:57.770362    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:09:57.805776    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:09:57.809410    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:09:57.841634    7364 logs.go:282] 0 containers: []
	W1210 07:09:57.841634    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:09:57.845439    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:09:57.876674    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:09:57.879670    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:09:57.911551    7364 logs.go:282] 0 containers: []
	W1210 07:09:57.911551    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:09:57.917158    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:09:57.946563    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:09:57.950560    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:09:57.980683    7364 logs.go:282] 0 containers: []
	W1210 07:09:57.980683    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:09:57.984729    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:09:58.015891    7364 logs.go:282] 0 containers: []
	W1210 07:09:58.015950    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:09:58.015950    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:09:58.015996    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:09:58.053785    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:09:58.053785    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:09:58.136709    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:09:58.136709    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:09:58.136709    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:09:58.178859    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:09:58.178859    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:09:58.240988    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:09:58.241508    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:09:58.291046    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:09:58.291046    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:09:58.338195    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:09:58.338195    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:09:58.375995    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:09:58.375995    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:09:58.412399    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:09:58.412399    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:00.984700    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:01.007879    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:01.042754    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:01.046968    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:01.077008    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:01.080487    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:01.112020    7364 logs.go:282] 0 containers: []
	W1210 07:10:01.112088    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:01.115761    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:01.152114    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:01.155822    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:01.187044    7364 logs.go:282] 0 containers: []
	W1210 07:10:01.187044    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:01.191101    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:01.222530    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:01.225671    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:01.254573    7364 logs.go:282] 0 containers: []
	W1210 07:10:01.254599    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:01.258462    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:01.292090    7364 logs.go:282] 0 containers: []
	W1210 07:10:01.292090    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:01.292090    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:01.292090    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:01.372822    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:01.372822    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:01.372822    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:01.411118    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:01.411118    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:01.460339    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:01.460407    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:01.528400    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:01.528400    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:01.568405    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:01.568405    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:01.621722    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:01.621767    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:01.664253    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:01.664253    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:01.716698    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:01.716698    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:04.264547    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:04.291467    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:04.325791    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:04.328655    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:04.361755    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:04.365270    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:04.394057    7364 logs.go:282] 0 containers: []
	W1210 07:10:04.394085    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:04.398224    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:04.428994    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:04.432313    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:04.464335    7364 logs.go:282] 0 containers: []
	W1210 07:10:04.464408    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:04.468241    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:04.499805    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:04.503853    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:04.549390    7364 logs.go:282] 0 containers: []
	W1210 07:10:04.549462    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:04.554657    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:04.582071    7364 logs.go:282] 0 containers: []
	W1210 07:10:04.582118    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:04.582118    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:04.582118    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:04.636286    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:04.636368    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:04.700437    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:04.700437    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:04.743119    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:04.743119    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:04.785528    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:04.785528    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:04.826498    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:04.826498    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:04.859738    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:04.859738    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:04.896339    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:04.897342    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:04.974330    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:04.974330    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:04.974330    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:07.526050    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:07.549025    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:07.580214    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:07.584200    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:07.615448    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:07.619243    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:07.648949    7364 logs.go:282] 0 containers: []
	W1210 07:10:07.648949    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:07.652243    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:07.684670    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:07.688319    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:07.719412    7364 logs.go:282] 0 containers: []
	W1210 07:10:07.719412    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:07.722954    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:07.751515    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:07.755704    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:07.783112    7364 logs.go:282] 0 containers: []
	W1210 07:10:07.783112    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:07.786793    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:07.817395    7364 logs.go:282] 0 containers: []
	W1210 07:10:07.817395    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:07.817395    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:07.817395    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:07.862356    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:07.862356    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:07.929105    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:07.929105    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:07.969072    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:07.969072    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:08.010598    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:08.010659    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:08.047042    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:08.047042    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:08.080023    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:08.080112    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:08.134807    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:08.134869    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:08.215488    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:08.215488    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:08.215488    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:10.775710    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:10.800307    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:10.831314    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:10.835680    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:10.866119    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:10.869872    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:10.902435    7364 logs.go:282] 0 containers: []
	W1210 07:10:10.902435    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:10.906574    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:10.941700    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:10.945184    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:10.977445    7364 logs.go:282] 0 containers: []
	W1210 07:10:10.977516    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:10.981069    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:11.018707    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:11.022084    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:11.054124    7364 logs.go:282] 0 containers: []
	W1210 07:10:11.054124    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:11.059861    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:11.094677    7364 logs.go:282] 0 containers: []
	W1210 07:10:11.094746    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:11.094801    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:11.094801    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:11.141579    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:11.141579    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:11.179222    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:11.179222    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:11.219008    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:11.219008    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:11.286276    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:11.287277    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:11.353856    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:11.353856    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:11.405550    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:11.405550    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:11.469482    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:11.469482    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:11.546480    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:11.546480    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:11.546480    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:14.106386    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:14.191521    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:14.257606    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:14.266096    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:14.326209    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:14.333192    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:14.377184    7364 logs.go:282] 0 containers: []
	W1210 07:10:14.377184    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:14.382198    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:14.426099    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:14.430697    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:14.473241    7364 logs.go:282] 0 containers: []
	W1210 07:10:14.473241    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:14.479227    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:14.531245    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:14.536235    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:14.580252    7364 logs.go:282] 0 containers: []
	W1210 07:10:14.580252    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:14.585243    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:14.628244    7364 logs.go:282] 0 containers: []
	W1210 07:10:14.628244    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:14.628244    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:14.628244    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:14.677251    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:14.677251    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:14.737505    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:14.737505    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:14.809509    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:14.809509    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:14.896474    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:14.896474    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:14.958906    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:14.958906    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:15.034042    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:15.034089    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:15.134940    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:15.134940    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:15.251011    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:15.251011    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:15.251011    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:17.823750    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:17.841715    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:17.872414    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:17.876363    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:17.906448    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:17.913061    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:17.947837    7364 logs.go:282] 0 containers: []
	W1210 07:10:17.947837    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:17.951164    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:17.989417    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:17.992208    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:18.023808    7364 logs.go:282] 0 containers: []
	W1210 07:10:18.023808    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:18.026802    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:18.067831    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:18.071741    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:18.102660    7364 logs.go:282] 0 containers: []
	W1210 07:10:18.102660    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:18.108636    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:18.141058    7364 logs.go:282] 0 containers: []
	W1210 07:10:18.141058    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:18.141058    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:18.141058    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:18.216553    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:18.216630    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:18.216630    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:18.258358    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:18.258358    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:18.301622    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:18.301622    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:18.338612    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:18.338612    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:18.395110    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:18.395110    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:18.459290    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:18.459290    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:18.497305    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:18.498306    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:18.550020    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:18.550020    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:21.087972    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:21.111815    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:21.149491    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:21.153682    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:21.187297    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:21.193265    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:21.225197    7364 logs.go:282] 0 containers: []
	W1210 07:10:21.225197    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:21.228811    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:21.260626    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:21.265154    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:21.295691    7364 logs.go:282] 0 containers: []
	W1210 07:10:21.295691    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:21.299651    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:21.332166    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:21.335436    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:21.363595    7364 logs.go:282] 0 containers: []
	W1210 07:10:21.363595    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:21.367715    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:21.396642    7364 logs.go:282] 0 containers: []
	W1210 07:10:21.396642    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:21.396642    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:21.396642    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:21.461279    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:21.461279    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:21.549733    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:21.549733    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:21.549733    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:21.589791    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:21.589791    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:21.636683    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:21.636789    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:21.674379    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:21.674379    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:21.727626    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:21.727626    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:21.774254    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:21.774254    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:21.822432    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:21.822432    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:24.368022    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:24.389997    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:24.420835    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:24.424364    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:24.457735    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:24.461165    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:24.493995    7364 logs.go:282] 0 containers: []
	W1210 07:10:24.493995    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:24.497602    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:24.526404    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:24.530434    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:24.558371    7364 logs.go:282] 0 containers: []
	W1210 07:10:24.558371    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:24.563025    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:24.596215    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:24.600537    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:24.630188    7364 logs.go:282] 0 containers: []
	W1210 07:10:24.630188    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:24.633620    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:24.662933    7364 logs.go:282] 0 containers: []
	W1210 07:10:24.662933    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:24.662933    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:24.662933    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:24.711236    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:24.711236    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:24.756591    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:24.756591    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:24.792239    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:24.792239    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:24.841058    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:24.841058    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:24.878554    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:24.878554    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:24.922089    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:24.922089    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:24.956461    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:24.956461    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:25.022481    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:25.023485    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:25.105261    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:27.610682    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:27.635236    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:27.665265    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:27.669985    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:27.700872    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:27.704964    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:27.737666    7364 logs.go:282] 0 containers: []
	W1210 07:10:27.737666    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:27.741448    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:27.771798    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:27.775801    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:27.805265    7364 logs.go:282] 0 containers: []
	W1210 07:10:27.805265    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:27.808862    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:27.839278    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:27.842988    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:27.874412    7364 logs.go:282] 0 containers: []
	W1210 07:10:27.874437    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:27.878160    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:27.910681    7364 logs.go:282] 0 containers: []
	W1210 07:10:27.910736    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:27.910756    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:27.910756    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:27.947721    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:27.947721    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:28.013962    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:28.013962    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:28.055084    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:28.055084    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:28.103490    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:28.103490    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:28.149848    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:28.149848    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:28.198228    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:28.198228    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:28.265345    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:28.265345    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:28.347252    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:28.347782    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:28.347782    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:30.894846    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:30.913968    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:30.951439    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:30.955562    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:30.992974    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:30.996727    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:31.035143    7364 logs.go:282] 0 containers: []
	W1210 07:10:31.035143    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:31.038785    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:31.072885    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:31.076281    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:31.108273    7364 logs.go:282] 0 containers: []
	W1210 07:10:31.108322    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:31.114224    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:31.151523    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:31.156090    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:31.185827    7364 logs.go:282] 0 containers: []
	W1210 07:10:31.185925    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:31.190104    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:31.226565    7364 logs.go:282] 0 containers: []
	W1210 07:10:31.226565    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:31.226565    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:31.226565    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:31.275367    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:31.275367    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:31.312460    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:31.312460    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:31.371031    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:31.371071    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:31.437450    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:31.437450    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:31.482632    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:31.482632    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:31.594263    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:31.594263    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:31.594263    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:31.641177    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:31.641177    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:31.688176    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:31.688176    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:34.229062    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:34.251701    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:34.289319    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:34.292331    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:34.333079    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:34.336069    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:34.371645    7364 logs.go:282] 0 containers: []
	W1210 07:10:34.371645    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:34.374632    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:34.407670    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:34.410642    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:34.442377    7364 logs.go:282] 0 containers: []
	W1210 07:10:34.442377    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:34.445365    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:34.476381    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:34.479377    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:34.509382    7364 logs.go:282] 0 containers: []
	W1210 07:10:34.509382    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:34.512397    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:34.549420    7364 logs.go:282] 0 containers: []
	W1210 07:10:34.549420    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:34.549420    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:34.549420    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:34.586672    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:34.586672    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:34.632203    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:34.632795    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:34.681973    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:34.681973    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:34.719008    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:34.719008    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:34.776945    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:34.777955    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:34.862296    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:34.862296    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:34.862296    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:34.904293    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:34.904293    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:34.942305    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:34.942305    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:37.513828    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:37.532827    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:37.559828    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:37.562822    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:37.599332    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:37.603368    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:37.633955    7364 logs.go:282] 0 containers: []
	W1210 07:10:37.633955    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:37.636950    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:37.671179    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:37.675171    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:37.707582    7364 logs.go:282] 0 containers: []
	W1210 07:10:37.707582    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:37.712697    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:37.741935    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:37.747197    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:37.780247    7364 logs.go:282] 0 containers: []
	W1210 07:10:37.780247    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:37.784231    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:37.809234    7364 logs.go:282] 0 containers: []
	W1210 07:10:37.809234    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:37.809234    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:37.810230    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:37.854252    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:37.854252    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:37.892249    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:37.892249    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:37.937379    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:37.937379    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:38.006974    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:38.006974    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:38.048638    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:38.048638    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:38.140864    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
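	# Editor's note: a sketch of the per-component container scan that repeats
	# above, using the same docker name filters minikube runs (k8s_<component>
	# is the container naming convention cri-dockerd uses for pod containers):
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	  echo "${c}: ${ids:-<none>}"
	done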
	I1210 07:10:38.140864    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:38.140864    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:38.192459    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:38.192459    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:38.243055    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:38.243055    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:40.793744    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:40.816146    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:40.852418    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:40.856910    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:40.889756    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:40.893381    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:40.922486    7364 logs.go:282] 0 containers: []
	W1210 07:10:40.922486    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:40.926836    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:40.960831    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:40.965916    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:40.994670    7364 logs.go:282] 0 containers: []
	W1210 07:10:40.994670    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:40.998625    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:41.028266    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:41.031584    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:41.062302    7364 logs.go:282] 0 containers: []
	W1210 07:10:41.062302    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:41.067120    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:41.100115    7364 logs.go:282] 0 containers: []
	W1210 07:10:41.100115    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:41.100115    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:41.100115    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:41.141833    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:41.141833    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:41.181990    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:41.181990    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:41.252443    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:41.253438    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:41.347415    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:41.347415    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:41.347415    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:41.396634    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:41.396634    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:41.444117    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:41.444117    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:41.505927    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:41.505927    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:41.547044    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:41.547044    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:44.097337    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:44.117435    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:44.151334    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:44.154448    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:44.188489    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:44.192500    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:44.218990    7364 logs.go:282] 0 containers: []
	W1210 07:10:44.218990    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:44.222434    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:44.257316    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:44.261442    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:44.288240    7364 logs.go:282] 0 containers: []
	W1210 07:10:44.288240    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:44.292779    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:44.324837    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:44.327829    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:44.359832    7364 logs.go:282] 0 containers: []
	W1210 07:10:44.359832    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:44.362834    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:44.391838    7364 logs.go:282] 0 containers: []
	W1210 07:10:44.391838    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:44.391838    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:44.391838    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:44.424500    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:44.424500    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:44.462890    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:44.462890    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:44.517565    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:44.517565    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:44.557571    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:44.557571    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:44.608750    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:44.608750    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:44.677776    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:44.677776    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:44.758683    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
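	# Editor's note: the pgrep line that opens each pass checks whether an
	# apiserver process exists before scanning containers; -f matches against
	# the full command line, -x requires the whole line to match the pattern,
	# and -n keeps only the newest match:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'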
	I1210 07:10:44.758683    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:44.758683    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:44.799814    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:44.800817    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:47.347526    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:47.372427    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:47.414917    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:47.419299    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:47.456213    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:47.460212    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:47.491241    7364 logs.go:282] 0 containers: []
	W1210 07:10:47.491241    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:47.495220    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:47.551789    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:47.556673    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:47.588837    7364 logs.go:282] 0 containers: []
	W1210 07:10:47.588872    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:47.593974    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:47.631461    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:47.635299    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:47.665180    7364 logs.go:282] 0 containers: []
	W1210 07:10:47.665180    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:47.668194    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:47.701904    7364 logs.go:282] 0 containers: []
	W1210 07:10:47.701904    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:47.701904    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:47.701904    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:47.800029    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:47.800029    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:47.800569    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:47.854190    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:47.854190    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:47.903414    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:47.903414    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:47.978652    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:47.978652    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:48.023563    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:48.023563    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:48.068599    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:48.068599    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:48.116472    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:48.116472    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:48.147478    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:48.147478    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:50.707922    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:50.731313    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:50.762321    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:50.765329    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:50.795314    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:50.798319    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:50.830568    7364 logs.go:282] 0 containers: []
	W1210 07:10:50.830627    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:50.834535    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:50.867365    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:50.871866    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:50.909028    7364 logs.go:282] 0 containers: []
	W1210 07:10:50.909080    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:50.913718    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:50.953933    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:50.958788    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:50.995132    7364 logs.go:282] 0 containers: []
	W1210 07:10:50.995132    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:51.000049    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:51.034554    7364 logs.go:282] 0 containers: []
	W1210 07:10:51.034601    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:51.034601    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:51.034662    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:51.074617    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:51.074617    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:51.125149    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:51.125684    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:51.170349    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:51.170349    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:51.232647    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:51.232647    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:51.307659    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:51.307659    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:51.417160    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:51.417160    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:51.417239    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:51.467991    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:51.467991    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:51.506645    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:51.506645    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:54.051242    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:54.074703    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:54.112413    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:54.117031    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:54.149230    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:54.153190    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:54.182034    7364 logs.go:282] 0 containers: []
	W1210 07:10:54.182034    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:54.186016    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:54.223085    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:54.227676    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:54.263123    7364 logs.go:282] 0 containers: []
	W1210 07:10:54.263123    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:54.268658    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:54.305848    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:54.310538    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:54.341220    7364 logs.go:282] 0 containers: []
	W1210 07:10:54.341220    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:54.346648    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:54.383123    7364 logs.go:282] 0 containers: []
	W1210 07:10:54.383195    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:54.383195    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:54.383195    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:54.462698    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:54.462698    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:54.501843    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:54.501843    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:54.557501    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:54.557539    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:54.607460    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:54.607460    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:54.660264    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:54.660264    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:10:54.706998    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:54.707041    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:54.792989    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:54.792989    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:54.792989    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:54.828456    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:54.828456    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:57.392418    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:10:57.413505    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:10:57.449578    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:10:57.453684    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:10:57.487464    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:10:57.492353    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:10:57.521309    7364 logs.go:282] 0 containers: []
	W1210 07:10:57.521356    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:10:57.525106    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:10:57.558404    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:10:57.561991    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:10:57.594366    7364 logs.go:282] 0 containers: []
	W1210 07:10:57.594442    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:10:57.598372    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:10:57.631536    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:10:57.637058    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:10:57.671235    7364 logs.go:282] 0 containers: []
	W1210 07:10:57.671322    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:10:57.674347    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:10:57.702980    7364 logs.go:282] 0 containers: []
	W1210 07:10:57.702980    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:10:57.702980    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:10:57.702980    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:10:57.772826    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:10:57.772826    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:10:57.813049    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:10:57.813049    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:10:57.846056    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:10:57.846056    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:10:57.903524    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:10:57.903524    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:10:57.940529    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:10:57.941524    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:10:58.049967    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:10:58.049967    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:10:58.049967    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:10:58.097096    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:10:58.097096    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:10:58.140095    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:10:58.140095    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:00.681365    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:00.706194    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:00.740273    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:00.743790    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:00.771647    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:00.777317    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:00.805885    7364 logs.go:282] 0 containers: []
	W1210 07:11:00.805885    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:00.809301    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:00.840502    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:00.847058    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:00.886915    7364 logs.go:282] 0 containers: []
	W1210 07:11:00.886915    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:00.889914    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:00.917910    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:00.920923    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:00.954394    7364 logs.go:282] 0 containers: []
	W1210 07:11:00.954394    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:00.957906    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:00.991927    7364 logs.go:282] 0 containers: []
	W1210 07:11:00.991927    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:00.991927    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:00.991927    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:01.067924    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:01.067924    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:01.102914    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:01.102914    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:01.175922    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:01.175922    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:01.212909    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:01.212909    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:01.259913    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:01.259913    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:01.310912    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:01.310912    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:01.354910    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:01.354910    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:01.415153    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:01.415153    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:01.511732    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
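	# Editor's note: the log sources gathered on every pass above, runnable
	# individually on the node (commands copied verbatim from the log; the
	# container ID is the one this run detected and will differ elsewhere):
	docker logs --tail 400 c742f3ab058c                                       # kube-apiserver
	sudo journalctl -u kubelet -n 400                                         # kubelet unit
	sudo journalctl -u docker -u cri-docker -n 400                            # container runtime
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel ring buffer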
	I1210 07:11:04.017504    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:04.036492    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:04.067673    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:04.071915    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:04.101559    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:04.106063    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:04.135117    7364 logs.go:282] 0 containers: []
	W1210 07:11:04.135117    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:04.140660    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:04.169521    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:04.172517    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:04.203292    7364 logs.go:282] 0 containers: []
	W1210 07:11:04.203292    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:04.207409    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:04.238635    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:04.241642    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:04.272404    7364 logs.go:282] 0 containers: []
	W1210 07:11:04.272404    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:04.276352    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:04.306972    7364 logs.go:282] 0 containers: []
	W1210 07:11:04.306972    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:04.306972    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:04.306972    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:04.391890    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:04.391890    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:04.391890    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:04.438158    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:04.438158    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:04.549313    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:04.549406    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:04.637580    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:04.637580    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:04.686320    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:04.686411    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:04.745613    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:04.745613    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:04.787603    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:04.787603    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:04.822605    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:04.822605    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:07.919307    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:07.949119    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:07.987607    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:07.990614    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:08.024621    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:08.028621    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:08.059521    7364 logs.go:282] 0 containers: []
	W1210 07:11:08.059521    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:08.062520    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:08.093520    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:08.096517    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:08.132981    7364 logs.go:282] 0 containers: []
	W1210 07:11:08.132981    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:08.136848    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:08.168155    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:08.172286    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:08.205376    7364 logs.go:282] 0 containers: []
	W1210 07:11:08.205414    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:08.208764    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:08.244142    7364 logs.go:282] 0 containers: []
	W1210 07:11:08.244142    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:08.244142    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:08.244142    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:08.321504    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:08.321504    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:08.357509    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:08.357509    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:08.423184    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:08.423184    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:08.486733    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:08.486733    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:08.566727    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:08.566727    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:08.604727    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:08.604727    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:08.643741    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:08.643741    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:08.700738    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:08.700738    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:08.819747    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:11.326360    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:11.367102    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:11.441825    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:11.448904    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:11.539971    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:11.545978    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:11.673789    7364 logs.go:282] 0 containers: []
	W1210 07:11:11.673789    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:11.681117    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:11.780594    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:11.789069    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:11.870447    7364 logs.go:282] 0 containers: []
	W1210 07:11:11.870447    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:11.876442    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:11.970072    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:11.982839    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:12.050178    7364 logs.go:282] 0 containers: []
	W1210 07:11:12.051181    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:12.057182    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:12.105194    7364 logs.go:282] 0 containers: []
	W1210 07:11:12.106199    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:12.106199    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:12.106199    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:12.185185    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:12.185185    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:12.236186    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:12.236186    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:12.303183    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:12.303183    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:12.387185    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:12.387185    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:12.486179    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
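	# Editor's note: the "describe nodes" probe that fails with "connection
	# refused" above; the kubectl path and kubeconfig are verbatim from the
	# log. It keeps exiting non-zero until the apiserver answers on
	# localhost:8443.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig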
	I1210 07:11:12.486179    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:12.486179    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:12.531452    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:12.531452    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:12.570450    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:12.570450    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:12.618689    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:12.618689    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:15.170871    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:15.200071    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:15.233051    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:15.236048    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:15.271058    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:15.276064    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:15.313048    7364 logs.go:282] 0 containers: []
	W1210 07:11:15.313048    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:15.317059    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:15.350063    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:15.354054    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:15.388063    7364 logs.go:282] 0 containers: []
	W1210 07:11:15.388063    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:15.393043    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:15.429049    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:15.432048    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:15.465240    7364 logs.go:282] 0 containers: []
	W1210 07:11:15.465240    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:15.469212    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:15.499219    7364 logs.go:282] 0 containers: []
	W1210 07:11:15.499219    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:15.499219    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:15.499219    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:15.587230    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:15.587230    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:15.587230    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:15.639225    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:15.639225    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:15.684211    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:15.684211    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:15.730212    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:15.730212    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:15.770215    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:15.770215    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:15.830640    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:15.830640    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:15.909476    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:15.909476    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:15.951477    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:15.951477    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
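	Each retry cycle above re-enumerates the control-plane containers (one `docker ps -a` per component) before re-gathering their logs. As a minimal sketch of that enumeration pattern — illustrative Go, not minikube's actual source; the component list and docker invocation simply mirror the log lines:

```go
// Sketch of the container-enumeration pattern visible in the log:
// for each control-plane component, list matching container IDs
// with a docker name filter (name=k8s_<component>).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Equivalent to: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %s %v\n", len(ids), c, ids)
	}
}
```

	In this run the enumeration consistently finds apiserver, etcd, scheduler, and controller-manager containers but no coredns, kube-proxy, kindnet, or storage-provisioner — consistent with a control plane that never finished coming up.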
	I1210 07:11:18.497242    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:18.518162    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:18.557321    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:18.561482    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:18.597314    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:18.600887    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:18.632576    7364 logs.go:282] 0 containers: []
	W1210 07:11:18.632576    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:18.635564    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:18.666150    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:18.669155    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:18.699144    7364 logs.go:282] 0 containers: []
	W1210 07:11:18.699144    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:18.703143    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:18.734551    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:18.739528    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:18.770425    7364 logs.go:282] 0 containers: []
	W1210 07:11:18.770425    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:18.773413    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:18.808422    7364 logs.go:282] 0 containers: []
	W1210 07:11:18.808422    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:18.808422    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:18.808422    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:18.844413    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:18.845426    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:18.934805    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:18.934868    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:18.934933    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:18.983821    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:18.983821    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:19.057805    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:19.057805    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:19.097815    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:19.097815    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:19.151515    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:19.151515    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:19.215525    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:19.215525    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:19.261515    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:19.261515    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:21.801587    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:21.822135    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:21.855648    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:21.858941    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:21.894595    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:21.899742    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:21.934680    7364 logs.go:282] 0 containers: []
	W1210 07:11:21.934680    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:21.942553    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:21.972291    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:21.975298    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:22.008295    7364 logs.go:282] 0 containers: []
	W1210 07:11:22.008295    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:22.012292    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:22.050295    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:22.054296    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:22.085814    7364 logs.go:282] 0 containers: []
	W1210 07:11:22.085814    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:22.090306    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:22.118111    7364 logs.go:282] 0 containers: []
	W1210 07:11:22.118111    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:22.118111    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:22.118111    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:22.183770    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:22.183770    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:22.222808    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:22.222808    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:22.266790    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:22.266790    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:22.309805    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:22.309805    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:22.351812    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:22.351812    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:22.390795    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:22.390795    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:22.490429    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:22.490429    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:22.490429    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:22.540429    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:22.540429    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
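	Every cycle's "describe nodes" step fails the same way: "connection refused" on localhost:8443. That error means nothing is accepting TCP connections on the apiserver port inside the node — the kube-apiserver container exists but is not serving. A minimal probe that distinguishes this case from a TLS- or HTTP-level failure (a sketch, assuming the standard apiserver port seen in the log):

```go
// Dial the apiserver port to separate "nothing listening"
// (connection refused, as in the log) from higher-level failures.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl error above:
		// no process is bound to the apiserver port.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```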
	I1210 07:11:25.105672    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:25.129185    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:25.162851    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:25.165845    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:25.194850    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:25.197839    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:25.226856    7364 logs.go:282] 0 containers: []
	W1210 07:11:25.226856    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:25.229846    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:25.258967    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:25.262745    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:25.306664    7364 logs.go:282] 0 containers: []
	W1210 07:11:25.306740    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:25.312797    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:25.347873    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:25.350868    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:25.383888    7364 logs.go:282] 0 containers: []
	W1210 07:11:25.383888    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:25.386865    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:25.419870    7364 logs.go:282] 0 containers: []
	W1210 07:11:25.419870    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:25.419870    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:25.419870    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:25.472017    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:25.472069    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:25.534907    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:25.534907    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:25.623108    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:25.623108    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:25.623108    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:25.676331    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:25.676331    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:25.715343    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:25.715343    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:25.764331    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:25.764331    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:25.813144    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:25.813144    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:25.853153    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:25.854146    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:28.401604    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:28.426110    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:28.458535    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:28.462150    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:28.496032    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:28.500042    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:28.526396    7364 logs.go:282] 0 containers: []
	W1210 07:11:28.526396    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:28.530202    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:28.560379    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:28.564295    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:28.596489    7364 logs.go:282] 0 containers: []
	W1210 07:11:28.596489    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:28.601331    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:28.630943    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:28.634737    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:28.665039    7364 logs.go:282] 0 containers: []
	W1210 07:11:28.665039    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:28.670209    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:28.704898    7364 logs.go:282] 0 containers: []
	W1210 07:11:28.704898    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:28.704972    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:28.704972    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:28.746931    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:28.746931    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:28.793990    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:28.793990    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:28.836005    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:28.836005    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:28.881345    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:28.881345    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:28.916662    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:28.916662    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:28.974824    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:28.974824    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:29.034509    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:29.035510    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:29.110980    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:29.110980    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:29.110980    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
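	The "container status" step uses a shell fallback: prefer crictl when it is on the PATH, otherwise fall back to `docker ps -a`. The same preference order, sketched in Go (illustrative only; the log's actual mechanism is the bash one-liner shown above):

```go
// Prefer crictl for container status; fall back to docker when
// crictl is absent or fails, mirroring the log's
// `which crictl || echo crictl` ... || sudo docker ps -a pattern.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("neither crictl nor docker usable:", err)
			return
		}
	}
	fmt.Print(string(out))
}
```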
	I1210 07:11:31.664759    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:31.689811    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:31.725531    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:31.729527    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:31.761125    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:31.765127    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:31.793552    7364 logs.go:282] 0 containers: []
	W1210 07:11:31.793552    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:31.798232    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:31.827164    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:31.830170    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:31.861163    7364 logs.go:282] 0 containers: []
	W1210 07:11:31.861163    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:31.864163    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:31.894165    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:31.897176    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:31.924435    7364 logs.go:282] 0 containers: []
	W1210 07:11:31.924435    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:31.928334    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:31.958868    7364 logs.go:282] 0 containers: []
	W1210 07:11:31.958868    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:31.958868    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:31.958868    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:32.020882    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:32.020882    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:32.102248    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:32.102304    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:32.102351    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:32.151453    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:32.151453    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:32.193281    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:32.193281    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:32.235930    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:32.235930    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:32.278507    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:32.278570    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:32.311820    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:32.311820    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:32.370628    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:32.370628    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:34.912601    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:34.936544    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:34.970033    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:34.973897    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:35.004490    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:35.007815    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:35.036498    7364 logs.go:282] 0 containers: []
	W1210 07:11:35.036498    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:35.042952    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:35.077555    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:35.083922    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:35.128343    7364 logs.go:282] 0 containers: []
	W1210 07:11:35.128343    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:35.132404    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:35.167935    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:35.172650    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:35.200292    7364 logs.go:282] 0 containers: []
	W1210 07:11:35.200292    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:35.205864    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:35.235793    7364 logs.go:282] 0 containers: []
	W1210 07:11:35.235835    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:35.235835    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:35.235890    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:35.316906    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:35.316906    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:35.316906    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:35.365805    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:35.365805    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:35.412202    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:35.413202    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:35.449298    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:35.449298    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:35.508704    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:35.508704    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:35.586956    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:35.586956    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:35.625970    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:35.625970    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:35.673267    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:35.673267    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
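	For each container ID the enumeration finds, the gatherer then tails its output with `docker logs --tail 400 <id>`. A sketch of that per-container step (the IDs below are the ones appearing in this log, used purely as placeholders):

```go
// Tail the last N lines of each discovered container's logs,
// as each gather phase above does per component.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder IDs from this run: c742f3ab058c (kube-apiserver),
	// 67d9fa01b7aa (etcd).
	for _, id := range []string{"c742f3ab058c", "67d9fa01b7aa"} {
		// docker logs may write to stderr, so capture both streams.
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("docker logs %s failed: %v\n", id, err)
			continue
		}
		fmt.Printf("--- last 400 lines of %s ---\n%s", id, out)
	}
}
```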
	I1210 07:11:38.218824    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:38.240333    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:38.273654    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:38.277757    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:38.310205    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:38.314447    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:38.345690    7364 logs.go:282] 0 containers: []
	W1210 07:11:38.345690    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:38.350335    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:38.378866    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:38.381925    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:38.408512    7364 logs.go:282] 0 containers: []
	W1210 07:11:38.408512    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:38.411512    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:38.441925    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:38.445663    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:38.478668    7364 logs.go:282] 0 containers: []
	W1210 07:11:38.478668    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:38.482673    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:38.513137    7364 logs.go:282] 0 containers: []
	W1210 07:11:38.513137    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:38.513137    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:38.513137    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:38.580378    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:38.580378    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:38.616159    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:38.616159    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:38.669230    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:38.669230    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:38.733699    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:38.733699    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:38.810501    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:38.810501    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:38.810501    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:38.850671    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:38.850671    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:38.895553    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:38.896554    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:38.929792    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:38.929792    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:41.470331    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:41.498661    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:41.533607    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:41.537629    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:41.569615    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:41.572608    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:41.604419    7364 logs.go:282] 0 containers: []
	W1210 07:11:41.604419    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:41.608929    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:41.640717    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:41.643992    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:41.679681    7364 logs.go:282] 0 containers: []
	W1210 07:11:41.679681    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:41.683389    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:41.714590    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:41.718818    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:41.749809    7364 logs.go:282] 0 containers: []
	W1210 07:11:41.749809    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:41.753641    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:41.783990    7364 logs.go:282] 0 containers: []
	W1210 07:11:41.783990    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:41.783990    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:41.783990    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:41.821991    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:41.821991    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:41.904212    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:41.904212    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:41.904212    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:41.949393    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:41.949393    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:41.989273    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:41.989273    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:42.033981    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:42.033981    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:42.068886    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:42.068886    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:42.117335    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:42.117335    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:42.158032    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:42.158119    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:44.729890    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:44.752874    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:44.788896    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:44.792880    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:44.831879    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:44.835882    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:44.872892    7364 logs.go:282] 0 containers: []
	W1210 07:11:44.873877    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:44.876887    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:44.914885    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:44.920890    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:44.954881    7364 logs.go:282] 0 containers: []
	W1210 07:11:44.954881    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:44.958880    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:44.994874    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:44.997885    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:45.033886    7364 logs.go:282] 0 containers: []
	W1210 07:11:45.033886    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:45.036911    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:45.071897    7364 logs.go:282] 0 containers: []
	W1210 07:11:45.071897    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:45.071897    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:45.071897    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:45.135877    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:45.135877    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:45.174303    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:45.174303    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:45.227594    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:45.227594    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:45.283233    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:45.283233    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:45.364870    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:45.364870    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:45.364870    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:45.409422    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:45.409422    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:45.457836    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:45.457836    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:45.494197    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:45.494197    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:48.031729    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:48.056495    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:48.088974    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:48.092619    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:48.125476    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:48.129187    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:48.162992    7364 logs.go:282] 0 containers: []
	W1210 07:11:48.163043    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:48.167928    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:48.202771    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:48.207393    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:48.239011    7364 logs.go:282] 0 containers: []
	W1210 07:11:48.239011    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:48.243526    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:48.273942    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:48.278629    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:48.305617    7364 logs.go:282] 0 containers: []
	W1210 07:11:48.305617    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:48.309082    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:48.339742    7364 logs.go:282] 0 containers: []
	W1210 07:11:48.339742    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:48.339742    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:48.339742    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:48.393286    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:48.393331    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:48.453841    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:48.453841    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:48.494869    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:48.494869    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:48.539087    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:48.539149    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:48.622453    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:48.622453    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:48.622453    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:48.673458    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:48.673458    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:48.718653    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:48.718653    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:48.755159    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:48.755159    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:51.292058    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:51.315345    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:51.347499    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:51.352538    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:51.382796    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:51.387723    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:51.424842    7364 logs.go:282] 0 containers: []
	W1210 07:11:51.424842    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:51.429154    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:51.462875    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:51.467229    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:51.503469    7364 logs.go:282] 0 containers: []
	W1210 07:11:51.503469    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:51.507701    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:51.542658    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:51.546613    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:51.577644    7364 logs.go:282] 0 containers: []
	W1210 07:11:51.577644    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:51.581594    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:51.611190    7364 logs.go:282] 0 containers: []
	W1210 07:11:51.611260    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:51.611260    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:51.611305    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:51.651646    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:51.651646    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:51.702483    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:51.702483    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:51.754075    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:51.754075    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:51.799305    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:51.799360    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:51.871943    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:51.871943    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:51.978740    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:51.978771    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:51.978771    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:52.056853    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:52.056853    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:52.094890    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:52.094890    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
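	The timestamps show the whole check-and-gather cycle repeating roughly every three seconds (07:11:12, :15, :18, :21, ...). A sketch of the implied wait loop — assumed behavior inferred from that cadence, not minikube's source; the timeout value is illustrative:

```go
// Re-check for a running kube-apiserver every few seconds until a
// deadline, gathering component logs on each miss, as the log shows.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Mirrors the log's check: pgrep -xnf kube-apiserver.*minikube.*
	// (pgrep exits 0 only when a matching process exists).
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// ... gather per-component logs here, as each cycle above does ...
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```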
	I1210 07:11:54.653688    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:54.677167    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:54.714455    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:54.718419    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:54.752725    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:54.757236    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:54.791918    7364 logs.go:282] 0 containers: []
	W1210 07:11:54.791918    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:54.795918    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:54.830598    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:54.834266    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:54.870253    7364 logs.go:282] 0 containers: []
	W1210 07:11:54.870253    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:54.875503    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:54.914242    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:54.917667    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:54.952768    7364 logs.go:282] 0 containers: []
	W1210 07:11:54.952870    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:54.956814    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:54.988412    7364 logs.go:282] 0 containers: []
	W1210 07:11:54.988412    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:54.988412    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:54.988412    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:55.048356    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:55.048356    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:55.087349    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:55.087349    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:55.145209    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:55.145209    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:55.233512    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:55.233658    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:55.233658    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:55.287277    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:55.287277    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:55.356883    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:55.356883    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:55.401382    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:55.401382    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:11:55.454703    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:55.454703    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:58.015581    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:11:58.042162    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:11:58.075914    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:11:58.079888    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:11:58.112164    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:11:58.115282    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:11:58.144847    7364 logs.go:282] 0 containers: []
	W1210 07:11:58.144847    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:11:58.148859    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:11:58.187213    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:11:58.190211    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:11:58.218208    7364 logs.go:282] 0 containers: []
	W1210 07:11:58.218208    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:11:58.222207    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:11:58.258213    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:11:58.262216    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:11:58.292670    7364 logs.go:282] 0 containers: []
	W1210 07:11:58.293684    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:11:58.296677    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:11:58.327408    7364 logs.go:282] 0 containers: []
	W1210 07:11:58.327408    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:11:58.327408    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:11:58.327408    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:11:58.373914    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:11:58.373914    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:11:58.406621    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:11:58.406621    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:11:58.459236    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:11:58.459236    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:11:58.529572    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:11:58.529572    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:11:58.613586    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:11:58.613665    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:11:58.613696    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:11:58.658022    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:11:58.658022    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:11:58.711463    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:11:58.711463    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:11:58.754828    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:11:58.755832    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:01.311633    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:01.330639    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:01.367644    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:01.370638    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:01.400637    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:01.403633    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:01.431453    7364 logs.go:282] 0 containers: []
	W1210 07:12:01.431453    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:01.437210    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:01.469396    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:01.472395    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:01.499396    7364 logs.go:282] 0 containers: []
	W1210 07:12:01.499396    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:01.502395    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:01.532196    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:01.537201    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:01.572206    7364 logs.go:282] 0 containers: []
	W1210 07:12:01.572206    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:01.575198    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:01.604718    7364 logs.go:282] 0 containers: []
	W1210 07:12:01.604718    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:01.604718    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:01.604718    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:01.658706    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:01.658706    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:01.703962    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:01.703962    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:01.748155    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:01.748155    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:01.790140    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:01.790140    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:01.846624    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:01.846718    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:01.910606    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:01.910606    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:01.956965    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:01.956965    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:02.050719    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:02.050719    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:02.051505    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:04.608955    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:04.643062    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:04.684704    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:04.688697    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:04.722617    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:04.726602    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:04.773193    7364 logs.go:282] 0 containers: []
	W1210 07:12:04.773193    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:04.777203    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:04.815364    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:04.818347    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:04.853721    7364 logs.go:282] 0 containers: []
	W1210 07:12:04.853721    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:04.859343    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:04.891476    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:04.894477    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:04.927491    7364 logs.go:282] 0 containers: []
	W1210 07:12:04.927491    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:04.930478    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:04.965484    7364 logs.go:282] 0 containers: []
	W1210 07:12:04.965484    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:04.965484    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:04.965484    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:05.001487    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:05.001487    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:05.076701    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:05.076701    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:05.155685    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:05.156225    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:05.156225    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:05.210971    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:05.210971    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:05.258088    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:05.258088    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:05.305980    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:05.305980    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:05.340963    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:05.340963    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:05.403970    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:05.403970    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:07.950119    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:07.978619    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:08.022345    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:08.028488    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:08.058640    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:08.063987    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:08.098547    7364 logs.go:282] 0 containers: []
	W1210 07:12:08.098547    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:08.102336    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:08.140236    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:08.143997    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:08.173048    7364 logs.go:282] 0 containers: []
	W1210 07:12:08.173048    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:08.177146    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:08.215291    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:08.219200    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:08.261681    7364 logs.go:282] 0 containers: []
	W1210 07:12:08.261681    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:08.269188    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:08.304150    7364 logs.go:282] 0 containers: []
	W1210 07:12:08.304150    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:08.304150    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:08.304150    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:08.340997    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:08.340997    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:08.420513    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:08.420513    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:08.568337    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:08.568337    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:08.623852    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:08.623852    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:08.686412    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:08.686412    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:08.781588    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:08.781588    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:08.781588    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:08.845910    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:08.845910    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:08.892471    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:08.892471    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:11.452049    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:11.506352    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:11.559374    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:11.565356    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:11.610349    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:11.616362    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:11.676151    7364 logs.go:282] 0 containers: []
	W1210 07:12:11.676151    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:11.684189    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:11.769247    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:11.774044    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:11.990993    7364 logs.go:282] 0 containers: []
	W1210 07:12:11.990993    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:11.996015    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:12.151509    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:12.156527    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:12.290311    7364 logs.go:282] 0 containers: []
	W1210 07:12:12.290311    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:12.295239    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:12.403012    7364 logs.go:282] 0 containers: []
	W1210 07:12:12.403012    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:12.403012    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:12.403012    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:12.627337    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:12.627337    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:12.771123    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:12.771123    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:12.904913    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:12.904913    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:13.014770    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:13.014902    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:13.107276    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:13.107276    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:13.230263    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:13.230263    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:13.230263    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:13.291264    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:13.291264    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:13.354756    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:13.354756    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:15.915277    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:15.937940    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:15.971903    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:15.975896    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:16.012919    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:16.015889    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:16.055909    7364 logs.go:282] 0 containers: []
	W1210 07:12:16.056904    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:16.059895    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:16.095916    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:16.098920    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:16.129900    7364 logs.go:282] 0 containers: []
	W1210 07:12:16.129900    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:16.132905    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:16.162903    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:16.166919    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:16.196916    7364 logs.go:282] 0 containers: []
	W1210 07:12:16.196916    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:16.199906    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:16.228915    7364 logs.go:282] 0 containers: []
	W1210 07:12:16.228915    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:16.228915    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:16.228915    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:16.289895    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:16.289895    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:16.357691    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:16.357691    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:16.408688    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:16.408688    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:16.459693    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:16.459693    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:16.526686    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:16.526686    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:16.569694    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:16.570690    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:16.604703    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:16.604703    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:16.642694    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:16.642694    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:16.748093    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:19.257924    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:19.289799    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:19.327802    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:19.332800    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:19.372798    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:19.378796    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:19.414801    7364 logs.go:282] 0 containers: []
	W1210 07:12:19.414801    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:19.419801    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:19.450798    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:19.453810    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:19.482808    7364 logs.go:282] 0 containers: []
	W1210 07:12:19.482808    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:19.486803    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:19.515811    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:19.518801    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:19.548798    7364 logs.go:282] 0 containers: []
	W1210 07:12:19.548798    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:19.551803    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:19.578811    7364 logs.go:282] 0 containers: []
	W1210 07:12:19.578811    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:19.578811    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:19.578811    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:19.621809    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:19.621809    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:19.655811    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:19.655811    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:19.718815    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:19.718815    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:19.771807    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:19.771807    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:19.809512    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:19.809512    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:12:19.856506    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:19.856506    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:19.894502    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:19.894502    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:19.980517    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:19.981515    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:19.981515    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:22.530830    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:22.648827    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:12:22.741464    7364 logs.go:282] 1 containers: [c742f3ab058c]
	I1210 07:12:22.747552    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:12:22.796288    7364 logs.go:282] 1 containers: [67d9fa01b7aa]
	I1210 07:12:22.805914    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:12:22.855888    7364 logs.go:282] 0 containers: []
	W1210 07:12:22.855888    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:12:22.861897    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:12:22.936858    7364 logs.go:282] 1 containers: [5979722c7c1d]
	I1210 07:12:22.941867    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:12:22.993883    7364 logs.go:282] 0 containers: []
	W1210 07:12:22.993883    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:12:22.998868    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:12:23.042552    7364 logs.go:282] 1 containers: [b07ccc28ebf8]
	I1210 07:12:23.047549    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:12:23.104188    7364 logs.go:282] 0 containers: []
	W1210 07:12:23.105198    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:12:23.112333    7364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1210 07:12:23.163791    7364 logs.go:282] 0 containers: []
	W1210 07:12:23.163791    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:12:23.163791    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:12:23.163791    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:12:23.286276    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:12:23.286276    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:12:23.353994    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:12:23.353994    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:12:23.490561    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:12:23.490561    7364 logs.go:123] Gathering logs for kube-scheduler [5979722c7c1d] ...
	I1210 07:12:23.490561    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5979722c7c1d"
	I1210 07:12:23.556804    7364 logs.go:123] Gathering logs for kube-apiserver [c742f3ab058c] ...
	I1210 07:12:23.556804    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c742f3ab058c"
	I1210 07:12:23.636584    7364 logs.go:123] Gathering logs for etcd [67d9fa01b7aa] ...
	I1210 07:12:23.636584    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67d9fa01b7aa"
	I1210 07:12:23.710405    7364 logs.go:123] Gathering logs for kube-controller-manager [b07ccc28ebf8] ...
	I1210 07:12:23.710405    7364 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b07ccc28ebf8"
	I1210 07:12:23.770706    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:12:23.770706    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:12:23.819709    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:12:23.819709    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
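	Each three-second cycle above follows the same shape: probe for a running apiserver, enumerate each control-plane component's container by a k8s_<name> filter, then tail 400 lines from whatever was found. Below is a minimal Go sketch of that enumeration step, run locally for illustration only; the real code drives these commands through an SSH runner (ssh_runner.go), and the helper here is a simplification, not minikube's API.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs lists container IDs whose name matches k8s_<component>.
// Sketch only: minikube runs this over SSH inside the node, not locally.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// Matches the `No container was found matching ...` warnings above.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Matches the `docker logs --tail 400 <id>` calls in the log.
			exec.Command("docker", "logs", "--tail", "400", id).Run()
		}
	}
}
```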
	I1210 07:12:26.400005    7364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:12:26.417026    7364 kubeadm.go:602] duration metric: took 4m2.9634904s to restartPrimaryControlPlane
	W1210 07:12:26.417026    7364 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:12:26.421023    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 07:12:27.127659    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:12:27.151661    7364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:12:27.165673    7364 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:12:27.169680    7364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:12:27.182666    7364 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:12:27.182666    7364 kubeadm.go:158] found existing configuration files:
	
	I1210 07:12:27.188674    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:12:27.201680    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:12:27.205659    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:12:27.225659    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:12:27.237670    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:12:27.241661    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:12:27.266341    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:12:27.284670    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:12:27.289661    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:12:27.309671    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:12:27.328658    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:12:27.334664    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
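	The grep-and-remove sequence above is minikube's stale-config cleanup: each leftover kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443, and is deleted otherwise so kubeadm can regenerate it. A minimal sketch of that check, assuming direct file access rather than the ssh_runner indirection used in the log:

```go
package main

import (
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// configs are the four files checked in the log above.
var configs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func main() {
	for _, path := range configs {
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove it so `kubeadm init`
		// writes a fresh copy (the "will remove" branch in kubeadm.go:164).
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(path)
		}
	}
}
```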
	I1210 07:12:27.356678    7364 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:12:27.414718    7364 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:12:27.415266    7364 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:12:27.562120    7364 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:12:27.562435    7364 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:12:27.562495    7364 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:12:27.562562    7364 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:12:27.562657    7364 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:12:27.562813    7364 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:12:27.563052    7364 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:12:27.563256    7364 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:12:27.563645    7364 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:12:27.563645    7364 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:12:27.563645    7364 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:12:27.563645    7364 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:12:27.563645    7364 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:12:27.564191    7364 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:12:27.564267    7364 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:12:27.564396    7364 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:12:27.564532    7364 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:12:27.564712    7364 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:12:27.564917    7364 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:12:27.565101    7364 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:12:27.565238    7364 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:12:27.565402    7364 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:12:27.565575    7364 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:12:27.565733    7364 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:12:27.565935    7364 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:12:27.566054    7364 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:12:27.566225    7364 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:12:27.566324    7364 kubeadm.go:319] OS: Linux
	I1210 07:12:27.566487    7364 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:12:27.566591    7364 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:12:27.566692    7364 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:12:27.566785    7364 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:12:27.566918    7364 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:12:27.566918    7364 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:12:27.566918    7364 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:12:27.566918    7364 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:12:27.566918    7364 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:12:27.666312    7364 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:12:27.667184    7364 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:12:27.667184    7364 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:12:27.694011    7364 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:12:27.696967    7364 out.go:252]   - Generating certificates and keys ...
	I1210 07:12:27.697204    7364 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:12:27.697204    7364 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:12:27.697204    7364 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:12:27.697204    7364 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:12:27.697945    7364 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:12:27.698198    7364 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:12:27.698412    7364 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:12:27.698677    7364 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:12:27.698793    7364 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:12:27.699229    7364 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:12:27.699347    7364 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:12:27.699670    7364 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:12:27.840656    7364 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:12:27.888386    7364 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:12:27.910545    7364 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:12:27.999772    7364 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:12:28.074041    7364 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:12:28.075078    7364 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:12:28.080609    7364 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:12:28.084483    7364 out.go:252]   - Booting up control plane ...
	I1210 07:12:28.084853    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:12:28.084908    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:12:28.088239    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:12:28.111525    7364 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:12:28.111525    7364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:12:28.122581    7364 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:12:28.122770    7364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:12:28.122770    7364 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:12:28.320029    7364 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:12:28.320029    7364 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:16:28.302003    7364 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00149516s
	I1210 07:16:28.302003    7364 kubeadm.go:319] 
	I1210 07:16:28.302003    7364 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:16:28.302003    7364 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:16:28.303011    7364 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:16:28.303011    7364 kubeadm.go:319] 
	I1210 07:16:28.303011    7364 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:16:28.303011    7364 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:16:28.303011    7364 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:16:28.303011    7364 kubeadm.go:319] 
	I1210 07:16:28.307008    7364 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:16:28.308010    7364 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:16:28.308010    7364 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:16:28.309013    7364 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:16:28.309013    7364 kubeadm.go:319] 
	I1210 07:16:28.309013    7364 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
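	The failure above is kubeadm's wait-control-plane phase timing out: it polls the kubelet's local health endpoint until a four-minute deadline expires, then surfaces the context error seen in the log. A minimal sketch of such a probe loop follows; the URL and deadline are taken from the log, while the one-second retry interval is an assumption.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers 200 OK
// or the context deadline expires.
func waitForKubelet(ctx context.Context) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors the "context deadline exceeded" error in the log.
			return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
		}
	}
}

func main() {
	// "This can take up to 4m0s" in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForKubelet(ctx); err != nil {
		fmt.Println(err)
	}
}
```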
	W1210 07:16:28.309013    7364 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00149516s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00149516s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
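	The two remediation commands kubeadm prints above, together with the healthz probe it polls, can be reproduced by hand against the node to see why the kubelet never came up. A minimal sketch, assuming the profile name used by this test (kubernetes-upgrade-458400, see below) and the standard kubelet healthz port 10248 shown in the log:
	
		# Run the suggested checks inside the minikube node container (sketch; profile name assumed from this test).
		minikube ssh -p kubernetes-upgrade-458400 "sudo systemctl status kubelet"
		minikube ssh -p kubernetes-upgrade-458400 "sudo journalctl -xeu kubelet | tail -n 50"
		# The same probe kubeadm's [kubelet-check] phase polls:
		minikube ssh -p kubernetes-upgrade-458400 "curl -sSL http://127.0.0.1:10248/healthz"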
	
	I1210 07:16:28.316004    7364 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 07:16:28.853702    7364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:16:28.885683    7364 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:16:28.892689    7364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:16:28.911678    7364 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:16:28.911678    7364 kubeadm.go:158] found existing configuration files:
	
	I1210 07:16:28.918676    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:16:28.935677    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:16:28.941676    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:16:28.966253    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:16:28.982515    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:16:28.988515    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:16:29.010512    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:16:29.027523    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:16:29.032522    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:16:29.052083    7364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:16:29.068190    7364 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:16:29.072199    7364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:16:29.087185    7364 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:16:29.214634    7364 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:16:29.306895    7364 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:16:29.409337    7364 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:20:30.136305    7364 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:20:30.136397    7364 kubeadm.go:319] 
	I1210 07:20:30.136484    7364 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:20:30.140292    7364 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:20:30.140544    7364 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:20:30.140877    7364 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:20:30.141166    7364 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:20:30.141427    7364 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:20:30.141634    7364 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:20:30.141796    7364 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:20:30.141899    7364 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:20:30.142106    7364 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:20:30.142265    7364 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:20:30.142533    7364 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:20:30.143293    7364 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:20:30.143985    7364 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:20:30.144136    7364 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:20:30.144136    7364 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:20:30.144136    7364 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:20:30.144775    7364 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:20:30.144903    7364 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:20:30.145154    7364 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] OS: Linux
	I1210 07:20:30.145329    7364 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:20:30.145848    7364 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:20:30.145907    7364 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:20:30.146007    7364 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:20:30.146118    7364 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:20:30.146179    7364 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:20:30.146279    7364 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:20:30.146372    7364 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:20:30.146558    7364 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:20:30.146802    7364 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:20:30.146802    7364 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:20:30.161130    7364 out.go:252]   - Generating certificates and keys ...
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:20:30.163121    7364 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:20:30.163121    7364 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:20:30.163121    7364 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:20:30.163121    7364 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:20:30.164130    7364 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:20:30.171108    7364 out.go:252]   - Booting up control plane ...
	I1210 07:20:30.171108    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:20:30.171108    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:20:30.171108    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:20:30.173124    7364 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:20:30.173124    7364 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:20:30.173124    7364 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:20:30.173124    7364 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001362378s
	I1210 07:20:30.173124    7364 kubeadm.go:319] 
	I1210 07:20:30.174112    7364 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:20:30.174112    7364 kubeadm.go:319] 
	I1210 07:20:30.174112    7364 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:20:30.174112    7364 kubeadm.go:319] 
	I1210 07:20:30.175118    7364 kubeadm.go:403] duration metric: took 12m6.7680758s to StartCluster
	I1210 07:20:30.175118    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:20:30.178124    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:20:30.245782    7364 cri.go:89] found id: ""
	I1210 07:20:30.245782    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.245782    7364 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:20:30.245782    7364 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:20:30.251806    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:20:30.302516    7364 cri.go:89] found id: ""
	I1210 07:20:30.302516    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.302516    7364 logs.go:284] No container was found matching "etcd"
	I1210 07:20:30.302516    7364 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:20:30.306534    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:20:30.347516    7364 cri.go:89] found id: ""
	I1210 07:20:30.348513    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.348513    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:20:30.348513    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:20:30.351513    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:20:30.407540    7364 cri.go:89] found id: ""
	I1210 07:20:30.407540    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.407540    7364 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:20:30.407540    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:20:30.414871    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:20:30.457866    7364 cri.go:89] found id: ""
	I1210 07:20:30.457866    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.457866    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:20:30.457866    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:20:30.462879    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:20:30.510264    7364 cri.go:89] found id: ""
	I1210 07:20:30.510264    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.510264    7364 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:20:30.510264    7364 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:20:30.514803    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:20:30.563001    7364 cri.go:89] found id: ""
	I1210 07:20:30.563001    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.563001    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:20:30.563001    7364 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:20:30.566997    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:20:30.627920    7364 cri.go:89] found id: ""
	I1210 07:20:30.627920    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.627920    7364 logs.go:284] No container was found matching "storage-provisioner"
	I1210 07:20:30.627920    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:20:30.627920    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:20:30.690748    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:20:30.690748    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:20:30.732002    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:20:30.733002    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:20:30.823548    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:20:30.823548    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:20:30.823548    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:20:30.859653    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:20:30.859653    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:20:30.913402    7364 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001362378s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
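	The cgroups-v1 warning above names a specific kubelet configuration option. As a hedged illustration only (the on-disk field spelling failCgroupV1 is assumed from the warning's 'FailCgroupV1'; this is not a fix the test applied), setting it would look roughly like this:
	
		# Hypothetical sketch: keep the kubelet running on a cgroup v1 host, per the warning text.
		# Field name 'failCgroupV1' assumed from the warning; /var/lib/kubelet/config.yaml path taken from the log.
		minikube ssh -p kubernetes-upgrade-458400 \
		  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"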
	W1210 07:20:30.913402    7364 out.go:285] * 
	W1210 07:20:30.913402    7364 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001362378s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:20:30.913402    7364 out.go:285] * 
	W1210 07:20:30.917526    7364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:20:30.927095    7364 out.go:203] 
	W1210 07:20:30.931100    7364 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001362378s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:20:30.931100    7364 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:20:30.931100    7364 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
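	minikube's own suggestion above maps directly onto the start invocation this test used (see version_upgrade_test.go below). A minimal sketch of the retried command with the suggested extra-config, from a bash-style shell:
	
		# Same profile, memory, version and driver as the failing run, plus the suggested cgroup-driver setting.
		out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 \
		  --kubernetes-version=v1.35.0-rc.1 --driver=docker \
		  --extra-config=kubelet.cgroup-driver=systemd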
	I1210 07:20:30.935101    7364 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-458400 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-458400 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-458400 version --output=json: exit status 1 (10.1404617s)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "34",
	    "gitVersion": "v1.34.3",
	    "gitCommit": "df11db1c0f08fab3c0baee1e5ce6efbf816af7f1",
	    "gitTreeState": "clean",
	    "buildDate": "2025-12-09T15:06:39Z",
	    "goVersion": "go1.24.11",
	    "compiler": "gc",
	    "platform": "windows/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
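Only the client half of the version JSON above is meaningful here, since the apiserver connection failed with EOF. A minimal sketch for extracting it without triggering the failing server round-trip, assuming jq is available on the host:

	# --client skips the server call that returned EOF; the jq path matches the JSON printed above.
	kubectl --context kubernetes-upgrade-458400 version --client --output=json \
	  | jq -r '.clientVersion.gitVersion'
	# expected output: v1.34.3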
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-10 07:20:42.1011629 +0000 UTC m=+6703.628971901
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-458400
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-458400:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d97c379c9cb758f687ceaf76650641db9085bec862b1e1ad12e6265882eed16b",
	        "Created": "2025-12-10T07:07:10.084973012Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272557,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:07:50.970047942Z",
	            "FinishedAt": "2025-12-10T07:07:47.759833815Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/d97c379c9cb758f687ceaf76650641db9085bec862b1e1ad12e6265882eed16b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d97c379c9cb758f687ceaf76650641db9085bec862b1e1ad12e6265882eed16b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d97c379c9cb758f687ceaf76650641db9085bec862b1e1ad12e6265882eed16b/hosts",
	        "LogPath": "/var/lib/docker/containers/d97c379c9cb758f687ceaf76650641db9085bec862b1e1ad12e6265882eed16b/d97c379c9cb758f687ceaf76650641db9085bec862b1e1ad12e6265882eed16b-json.log",
	        "Name": "/kubernetes-upgrade-458400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-458400:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-458400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a201f4101cc6cb9eca34726352eb8802cc3f8730e273babd490b26528902cdec-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a201f4101cc6cb9eca34726352eb8802cc3f8730e273babd490b26528902cdec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a201f4101cc6cb9eca34726352eb8802cc3f8730e273babd490b26528902cdec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a201f4101cc6cb9eca34726352eb8802cc3f8730e273babd490b26528902cdec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-458400",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-458400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-458400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-458400",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-458400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a274a8ef4631179057e5e56654e1e0f54e6d53f35a2348b781571fa50c818daf",
	            "SandboxKey": "/var/run/docker/netns/a274a8ef4631",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55050"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55052"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-458400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "10dc65ad3e73402b8ad2a0047dd3aaf5e2e55949d7c4ec2e5acb74a354f8e528",
	                    "EndpointID": "82e43fb0f004b5996b94c5f5eac09b5e0979bb36afa843ba3a6540d37cdf281a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-458400",
	                        "d97c379c9cb7"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
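The full inspect JSON above can be narrowed to just the fields that matter for triage using docker's Go templates; a minimal sketch (container name taken from the output above, quoting shown for a POSIX shell):

    docker inspect kubernetes-upgrade-458400 --format '{{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}'
    docker inspect kubernetes-upgrade-458400 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

The second template is the same pattern minikube itself uses further down in these logs to resolve the host port mapped to 22/tcp.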
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-458400 -n kubernetes-upgrade-458400
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-458400 -n kubernetes-upgrade-458400: exit status 2 (651.5136ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
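minikube's status command encodes component health in its exit code (per its help text, one bit per component), so a non-zero exit while Host prints "Running" suggests a component other than the host is unhealthy. To see every field at once, the command also supports JSON output; a sketch using the documented --output flag:

    out/minikube-windows-amd64.exe status -p kubernetes-upgrade-458400 --output=json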
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-458400 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-458400 logs -n 25: (1.2216883s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                           │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-412400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                        │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:16 UTC │ 10 Dec 25 07:16 UTC │
	│ start   │ -p old-k8s-version-412400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0      │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:16 UTC │ 10 Dec 25 07:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-757000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                 │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:16 UTC │ 10 Dec 25 07:16 UTC │
	│ stop    │ -p embed-certs-757000 --alsologtostderr -v=3                                                                                                                                                                             │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:16 UTC │ 10 Dec 25 07:17 UTC │
	│ delete  │ -p cert-expiration-804900                                                                                                                                                                                                │ cert-expiration-804900       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-757000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                            │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:17 UTC │
	│ start   │ -p embed-certs-757000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.3                                                                                             │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:18 UTC │
	│ delete  │ -p disable-driver-mounts-768900                                                                                                                                                                                          │ disable-driver-mounts-768900 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:17 UTC │
	│ start   │ -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-099700            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │                     │
	│ image   │ old-k8s-version-412400 image list --format=json                                                                                                                                                                          │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:17 UTC │
	│ pause   │ -p old-k8s-version-412400 --alsologtostderr -v=1                                                                                                                                                                         │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:17 UTC │
	│ unpause │ -p old-k8s-version-412400 --alsologtostderr -v=1                                                                                                                                                                         │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:17 UTC │
	│ delete  │ -p old-k8s-version-412400                                                                                                                                                                                                │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:17 UTC │ 10 Dec 25 07:18 UTC │
	│ delete  │ -p old-k8s-version-412400                                                                                                                                                                                                │ old-k8s-version-412400       │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:18 UTC │
	│ start   │ -p default-k8s-diff-port-144100 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-144100 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:19 UTC │
	│ image   │ embed-certs-757000 image list --format=json                                                                                                                                                                              │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:18 UTC │
	│ pause   │ -p embed-certs-757000 --alsologtostderr -v=1                                                                                                                                                                             │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:18 UTC │
	│ unpause │ -p embed-certs-757000 --alsologtostderr -v=1                                                                                                                                                                             │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:18 UTC │
	│ delete  │ -p embed-certs-757000                                                                                                                                                                                                    │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:18 UTC │
	│ delete  │ -p embed-certs-757000                                                                                                                                                                                                    │ embed-certs-757000           │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │ 10 Dec 25 07:18 UTC │
	│ start   │ -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1 │ newest-cni-525200            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:18 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-144100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                       │ default-k8s-diff-port-144100 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ stop    │ -p default-k8s-diff-port-144100 --alsologtostderr -v=3                                                                                                                                                                   │ default-k8s-diff-port-144100 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-144100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                  │ default-k8s-diff-port-144100 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │ 10 Dec 25 07:20 UTC │
	│ start   │ -p default-k8s-diff-port-144100 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-144100 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:20:21
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:20:21.621172   10884 out.go:360] Setting OutFile to fd 1648 ...
	I1210 07:20:21.664592   10884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:20:21.664634   10884 out.go:374] Setting ErrFile to fd 984...
	I1210 07:20:21.664634   10884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:20:21.678573   10884 out.go:368] Setting JSON to false
	I1210 07:20:21.681145   10884 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10153,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:20:21.681145   10884 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:20:21.684710   10884 out.go:179] * [default-k8s-diff-port-144100] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:20:21.686821   10884 notify.go:221] Checking for updates...
	I1210 07:20:21.689153   10884 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:20:21.690827   10884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:20:21.692999   10884 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:20:21.695271   10884 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:20:21.697643   10884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:20:21.699426   10884 config.go:182] Loaded profile config "default-k8s-diff-port-144100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:20:21.700436   10884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:20:21.818689   10884 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:20:21.822731   10884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:20:22.044116   10884 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:20:22.026427106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:20:22.057722   10884 out.go:179] * Using the docker driver based on existing profile
	I1210 07:20:22.061896   10884 start.go:309] selected driver: docker
	I1210 07:20:22.061896   10884 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-144100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-144100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:20:22.061896   10884 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:20:22.161835   10884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:20:22.394477   10884 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:20:22.376299068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:20:22.395470   10884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:20:22.395550   10884 cni.go:84] Creating CNI manager for ""
	I1210 07:20:22.395550   10884 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:20:22.395550   10884 start.go:353] cluster config:
	{Name:default-k8s-diff-port-144100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-144100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:20:22.398823   10884 out.go:179] * Starting "default-k8s-diff-port-144100" primary control-plane node in "default-k8s-diff-port-144100" cluster
	I1210 07:20:22.409187   10884 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:20:22.419797   10884 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:20:22.423059   10884 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:20:22.423059   10884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:20:22.460603   10884 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:20:22.500856   10884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:20:22.500856   10884 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:20:22.706185   10884 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:20:22.706185   10884 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\config.json ...
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:20:22.706754   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:20:22.706185   10884 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:20:22.708886   10884 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:20:22.709458   10884 start.go:360] acquireMachinesLock for default-k8s-diff-port-144100: {Name:mk4849702c3f7a1dd9ae3b091c3f10fb6242d8da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:22.709722   10884 start.go:364] duration metric: took 184.6µs to acquireMachinesLock for "default-k8s-diff-port-144100"
	I1210 07:20:22.709722   10884 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:20:22.709722   10884 fix.go:54] fixHost starting: 
	I1210 07:20:22.719614   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:22.812355   10884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-144100: state=Stopped err=<nil>
	W1210 07:20:22.812413   10884 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:20:22.828438   10884 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-144100" ...
	I1210 07:20:22.836269   10884 cli_runner.go:164] Run: docker start default-k8s-diff-port-144100
	I1210 07:20:24.006300   10884 cli_runner.go:217] Completed: docker start default-k8s-diff-port-144100: (1.1700124s)
	I1210 07:20:24.013880   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:24.295996   10884 kic.go:430] container "default-k8s-diff-port-144100" state is running.
	I1210 07:20:24.306463   10884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-144100
	I1210 07:20:24.617731   10884 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\config.json ...
	I1210 07:20:24.619320   10884 machine.go:94] provisionDockerMachine start ...
	I1210 07:20:24.624101   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:24.754588   10884 main.go:143] libmachine: Using SSH client type: native
	I1210 07:20:24.754588   10884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56463 <nil> <nil>}
	I1210 07:20:24.754588   10884 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:20:24.757588   10884 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:20:25.776746   10884 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.776809   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:20:25.776809   10884 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.0699666s
	I1210 07:20:25.776809   10884 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:20:25.780031   10884 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.780586   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:20:25.780586   10884 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.0737847s
	I1210 07:20:25.780586   10884 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:20:25.824441   10884 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.825147   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:20:25.825366   10884 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.1191321s
	I1210 07:20:25.825430   10884 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:20:25.855997   10884 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.856136   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:20:25.856136   10884 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.1499017s
	I1210 07:20:25.856136   10884 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:20:25.858775   10884 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.858775   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:20:25.858775   10884 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.1519313s
	I1210 07:20:25.859309   10884 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:20:25.859985   10884 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.860692   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:20:25.860692   10884 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.1538487s
	I1210 07:20:25.860692   10884 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:20:25.868483   10884 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.868483   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:20:25.868483   10884 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.16164s
	I1210 07:20:25.868483   10884 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:20:25.939836   10884 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:20:25.939836   10884 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:20:25.939836   10884 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2336005s
	I1210 07:20:25.939836   10884 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:20:25.939836   10884 cache.go:87] Successfully saved all images to host disk.
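Both preload mirrors returned 404 earlier in this start (storage.googleapis.com and github.com), which is why the cache.go lines above show minikube falling back to caching the individual images. To confirm the missing tarball by hand, one could probe the first URL directly; plain curl, nothing minikube-specific assumed:

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4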
	I1210 07:20:30.136305    7364 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:20:30.136397    7364 kubeadm.go:319] 
	I1210 07:20:30.136484    7364 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:20:30.140292    7364 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:20:30.140544    7364 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:20:30.140877    7364 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:20:30.141166    7364 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:20:30.141427    7364 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:20:30.141634    7364 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:20:30.141796    7364 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:20:30.141899    7364 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:20:30.142106    7364 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:20:30.142265    7364 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:20:30.142533    7364 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:20:30.142753    7364 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:20:30.143293    7364 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:20:30.143450    7364 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:20:30.143985    7364 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:20:30.144136    7364 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:20:30.144136    7364 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:20:30.144136    7364 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:20:30.144775    7364 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:20:30.144903    7364 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:20:30.145154    7364 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] OS: Linux
	I1210 07:20:30.145329    7364 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:20:30.145329    7364 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:20:30.145848    7364 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:20:30.145907    7364 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:20:30.146007    7364 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:20:30.146118    7364 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:20:30.146179    7364 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:20:30.146279    7364 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:20:30.146372    7364 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:20:30.146558    7364 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:20:30.146802    7364 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:20:30.146802    7364 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:20:30.161130    7364 out.go:252]   - Generating certificates and keys ...
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:20:30.161130    7364 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:20:30.162129    7364 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:20:30.163121    7364 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:20:30.163121    7364 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:20:30.163121    7364 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:20:30.163121    7364 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:20:30.164130    7364 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:20:30.164130    7364 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:20:30.171108    7364 out.go:252]   - Booting up control plane ...
	I1210 07:20:30.171108    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:20:30.171108    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:20:30.171108    7364 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:20:30.172111    7364 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:20:30.173124    7364 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:20:30.173124    7364 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:20:30.173124    7364 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:20:30.173124    7364 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001362378s
	I1210 07:20:30.173124    7364 kubeadm.go:319] 
	I1210 07:20:30.174112    7364 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:20:30.174112    7364 kubeadm.go:319] 
	I1210 07:20:30.174112    7364 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:20:30.174112    7364 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:20:30.174112    7364 kubeadm.go:319] 
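The [kubelet-check] failure above is the result of a timed poll of the kubelet's local health endpoint. A minimal sketch of that probe, assuming only the Go standard library (illustrative only, not kubeadm's or minikube's actual code):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint until it answers
// 200 OK or the context deadline expires, mirroring the [kubelet-check]
// behaviour shown in the log above.
func waitForKubelet(ctx context.Context, url string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kubelet not healthy before deadline: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url)
			if err != nil {
				continue // kubelet not listening yet; keep polling
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	// kubeadm allows up to 4m0s for this check, matching the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForKubelet(ctx, "http://127.0.0.1:10248/healthz"); err != nil {
		fmt.Println(err)
	}
}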
	I1210 07:20:30.175118    7364 kubeadm.go:403] duration metric: took 12m6.7680758s to StartCluster
	I1210 07:20:30.175118    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:20:30.178124    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:20:30.245782    7364 cri.go:89] found id: ""
	I1210 07:20:30.245782    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.245782    7364 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:20:30.245782    7364 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:20:30.251806    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:20:30.302516    7364 cri.go:89] found id: ""
	I1210 07:20:30.302516    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.302516    7364 logs.go:284] No container was found matching "etcd"
	I1210 07:20:30.302516    7364 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:20:30.306534    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:20:30.347516    7364 cri.go:89] found id: ""
	I1210 07:20:30.348513    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.348513    7364 logs.go:284] No container was found matching "coredns"
	I1210 07:20:30.348513    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:20:30.351513    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:20:30.407540    7364 cri.go:89] found id: ""
	I1210 07:20:30.407540    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.407540    7364 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:20:30.407540    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:20:30.414871    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:20:30.457866    7364 cri.go:89] found id: ""
	I1210 07:20:30.457866    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.457866    7364 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:20:30.457866    7364 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:20:30.462879    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:20:30.510264    7364 cri.go:89] found id: ""
	I1210 07:20:30.510264    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.510264    7364 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:20:30.510264    7364 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:20:30.514803    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:20:30.563001    7364 cri.go:89] found id: ""
	I1210 07:20:30.563001    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.563001    7364 logs.go:284] No container was found matching "kindnet"
	I1210 07:20:30.563001    7364 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 07:20:30.566997    7364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 07:20:30.627920    7364 cri.go:89] found id: ""
	I1210 07:20:30.627920    7364 logs.go:282] 0 containers: []
	W1210 07:20:30.627920    7364 logs.go:284] No container was found matching "storage-provisioner"
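The eight "listing CRI containers" probes above all follow one pattern: run `crictl ps -a --quiet --name=<component>` and treat empty output as "no container found". A self-contained sketch of that scan (a hypothetical helper, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component list the log walks through above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		// --quiet prints one container ID per line; no lines means no container.
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s)\n", name, len(ids))
		}
	}
}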
	I1210 07:20:30.627920    7364 logs.go:123] Gathering logs for kubelet ...
	I1210 07:20:30.627920    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:20:30.690748    7364 logs.go:123] Gathering logs for dmesg ...
	I1210 07:20:30.690748    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:20:30.732002    7364 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:20:30.733002    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:20:30.823548    7364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:20:30.823548    7364 logs.go:123] Gathering logs for Docker ...
	I1210 07:20:30.823548    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:20:30.859653    7364 logs.go:123] Gathering logs for container status ...
	I1210 07:20:30.859653    7364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
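With no component containers to inspect, the tool falls back to raw host logs, as the "Gathering logs for ..." lines show. A sketch of that gather step running the same commands locally (commands copied from the log; the wrapper itself is hypothetical, and the real flow runs them over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same diagnostic commands gathered in the log above.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
		}
		fmt.Printf("== %s ==\n%s\n", s.name, out)
	}
}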
	W1210 07:20:30.913402    7364 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001362378s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:20:30.913402    7364 out.go:285] * 
	W1210 07:20:30.913402    7364 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout and stderr identical to the kubeadm init output above; verbatim duplicate omitted ...]
	
	W1210 07:20:30.913402    7364 out.go:285] * 
	W1210 07:20:30.917526    7364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:20:30.927095    7364 out.go:203] 
	W1210 07:20:30.931100    7364 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout and stderr identical to the kubeadm init output above; verbatim duplicate omitted ...]
	
	W1210 07:20:30.931100    7364 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:20:30.931100    7364 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:20:30.935101    7364 out.go:203] 
	I1210 07:20:27.944119   10884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-144100
	
	I1210 07:20:27.944119   10884 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-144100"
	I1210 07:20:27.947209   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:28.006901   10884 main.go:143] libmachine: Using SSH client type: native
	I1210 07:20:28.006901   10884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56463 <nil> <nil>}
	I1210 07:20:28.006901   10884 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-144100 && echo "default-k8s-diff-port-144100" | sudo tee /etc/hostname
	I1210 07:20:28.198324   10884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-144100
	
	I1210 07:20:28.204863   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:28.264289   10884 main.go:143] libmachine: Using SSH client type: native
	I1210 07:20:28.265050   10884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56463 <nil> <nil>}
	I1210 07:20:28.265050   10884 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-144100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-144100/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-144100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:20:28.445490   10884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:20:28.445490   10884 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:20:28.445490   10884 ubuntu.go:190] setting up certificates
	I1210 07:20:28.445490   10884 provision.go:84] configureAuth start
	I1210 07:20:28.449609   10884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-144100
	I1210 07:20:28.504943   10884 provision.go:143] copyHostCerts
	I1210 07:20:28.505290   10884 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:20:28.505350   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:20:28.505382   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:20:28.505958   10884 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:20:28.506477   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:20:28.506607   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:20:28.507163   10884 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:20:28.507163   10884 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:20:28.507163   10884 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:20:28.507805   10884 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.default-k8s-diff-port-144100 san=[127.0.0.1 192.168.112.2 default-k8s-diff-port-144100 localhost minikube]
	I1210 07:20:28.556690   10884 provision.go:177] copyRemoteCerts
	I1210 07:20:28.560658   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:20:28.564152   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:28.621595   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:28.746735   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 07:20:28.776143   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:20:28.807436   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:20:28.839253   10884 provision.go:87] duration metric: took 393.7567ms to configureAuth
	I1210 07:20:28.839253   10884 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:20:28.839837   10884 config.go:182] Loaded profile config "default-k8s-diff-port-144100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:20:28.843146   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:28.901138   10884 main.go:143] libmachine: Using SSH client type: native
	I1210 07:20:28.901882   10884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56463 <nil> <nil>}
	I1210 07:20:28.901917   10884 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:20:29.085700   10884 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:20:29.085734   10884 ubuntu.go:71] root file system type: overlay
	I1210 07:20:29.085787   10884 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:20:29.089368   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:29.147831   10884 main.go:143] libmachine: Using SSH client type: native
	I1210 07:20:29.148220   10884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56463 <nil> <nil>}
	I1210 07:20:29.148220   10884 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:20:29.344650   10884 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:20:29.348323   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:29.411029   10884 main.go:143] libmachine: Using SSH client type: native
	I1210 07:20:29.411742   10884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56463 <nil> <nil>}
	I1210 07:20:29.411776   10884 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:20:29.601607   10884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:20:29.601607   10884 machine.go:97] duration metric: took 4.9822096s to provisionDockerMachine
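The provisioning step above stages the rendered unit as docker.service.new, diffs it against the live unit, and only swaps it in (with a daemon-reload, enable, and restart) when the contents differ, so an unchanged host avoids a needless Docker restart. A local sketch of that compare-then-replace logic (a hypothetical helper; the real flow issues these commands over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit swaps the staged unit into place and restarts docker, but
// only when its contents differ from the installed unit.
func updateUnit(installed, staged string) error {
	oldBytes, _ := os.ReadFile(installed) // a missing unit reads as empty
	newBytes, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(oldBytes, newBytes) {
		return os.Remove(staged) // unchanged: discard the staged copy
	}
	if err := os.Rename(staged, installed); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return fmt.Errorf("%v: %w", args, err)
		}
	}
	return nil
}

func main() {
	err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Println(err)
	}
}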
	I1210 07:20:29.601607   10884 start.go:293] postStartSetup for "default-k8s-diff-port-144100" (driver="docker")
	I1210 07:20:29.601607   10884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:20:29.606080   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:20:29.608804   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:29.662371   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:29.794812   10884 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:20:29.801732   10884 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:20:29.801732   10884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:20:29.801732   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:20:29.801732   10884 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:20:29.801732   10884 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:20:29.806732   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:20:29.819872   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:20:29.854058   10884 start.go:296] duration metric: took 252.4474ms for postStartSetup
	I1210 07:20:29.858511   10884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:20:29.861607   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:29.917686   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:30.049284   10884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:20:30.058221   10884 fix.go:56] duration metric: took 7.3483846s for fixHost
	I1210 07:20:30.058221   10884 start.go:83] releasing machines lock for "default-k8s-diff-port-144100", held for 7.3483846s
	I1210 07:20:30.061725   10884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-144100
	I1210 07:20:30.114766   10884 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:20:30.120264   10884 ssh_runner.go:195] Run: cat /version.json
	I1210 07:20:30.120325   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:30.122653   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:30.177126   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:30.179125   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	W1210 07:20:30.294360   10884 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:20:30.299517   10884 ssh_runner.go:195] Run: systemctl --version
	I1210 07:20:30.315516   10884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:20:30.323519   10884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:20:30.327513   10884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:20:30.340527   10884 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:20:30.340527   10884 start.go:496] detecting cgroup driver to use...
	I1210 07:20:30.340527   10884 detect.go:187] detected "cgroupfs" cgroup driver on host os
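The detected "cgroupfs" driver here connects to the cgroup v1 deprecation warning in the kubeadm output earlier. One common way to tell v1 from v2 on a host is to check for /sys/fs/cgroup/cgroup.controllers, which exists only on a unified (v2) hierarchy; a sketch of that check, not minikube's detect.go:

package main

import (
	"fmt"
	"os"
)

func main() {
	// On a unified cgroup v2 hierarchy this file exists at the cgroup
	// root; on legacy v1 it does not.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2: the systemd cgroup driver is the usual choice")
	} else {
		fmt.Println("cgroup v1: cgroupfs driver (deprecated for kubelet v1.35+, per the warning above)")
	}
}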
	I1210 07:20:30.340527   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:20:30.368220   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:20:30.391229   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:20:30.407540   10884 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:20:30.413302   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:20:30.431868   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1210 07:20:30.436871   10884 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:20:30.436871   10884 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:20:30.450877   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:20:30.468877   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:20:30.490546   10884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:20:30.507556   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:20:30.529622   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:20:30.550996   10884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:20:30.569720   10884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:20:30.589934   10884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:20:30.612014   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:20:30.772556   10884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:20:30.941096   10884 start.go:496] detecting cgroup driver to use...
	I1210 07:20:30.941096   10884 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:20:30.946094   10884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:20:30.977292   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:20:31.002276   10884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:20:31.075370   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:20:31.100603   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:20:31.121040   10884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:20:31.151240   10884 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:20:31.162967   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:20:31.176552   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:20:31.202336   10884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:20:31.366457   10884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:20:31.530538   10884 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:20:31.531529   10884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:20:31.554531   10884 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:20:31.576526   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:20:31.736541   10884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:20:32.701420   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:20:32.724081   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:20:32.748692   10884 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 07:20:32.771834   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:20:32.796500   10884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:20:32.943482   10884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:20:33.098521   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:20:33.246471   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:20:33.272251   10884 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:20:33.296667   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:20:33.433420   10884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:20:33.568326   10884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:20:33.589954   10884 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:20:33.594688   10884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:20:33.603554   10884 start.go:564] Will wait 60s for crictl version
	I1210 07:20:33.608222   10884 ssh_runner.go:195] Run: which crictl
	I1210 07:20:33.618665   10884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:20:33.663936   10884 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:20:33.667919   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:20:33.712078   10884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:20:33.756178   10884 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:20:33.760395   10884 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-144100 dig +short host.docker.internal
	I1210 07:20:33.902525   10884 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:20:33.907667   10884 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:20:33.916832   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
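The bash one-liner above is an idempotent hosts-file upsert: strip any existing line for host.minikube.internal, then append the freshly resolved mapping. The same logic as a small Go sketch (assuming the same path and entry; this helper is illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing line ending in "<TAB>name" and appends
// a fresh "ip<TAB>name" entry, mirroring the grep -v / echo pipeline above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}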
	I1210 07:20:33.939513   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:34.003127   10884 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-144100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-144100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:20:34.003127   10884 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:20:34.009583   10884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:20:34.045688   10884 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1210 07:20:34.045760   10884 cache_images.go:86] Images are preloaded, skipping loading
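"Images are preloaded, skipping loading" follows from comparing the runtime's image list against the expected preload set. A sketch of that check (the expected list is a subset taken from the -- stdout -- block above; the helper is hypothetical, not minikube's cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// A few of the images expected from the preload, per the log above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.34.3",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
	}
	// Same listing command the log runs above.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
	fmt.Println("preload check complete")
}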
	I1210 07:20:34.045782   10884 kubeadm.go:935] updating node { 192.168.112.2 8444 v1.34.3 docker true true} ...
	I1210 07:20:34.045864   10884 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-144100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.112.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-144100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:20:34.049465   10884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:20:34.130386   10884 cni.go:84] Creating CNI manager for ""
	I1210 07:20:34.130386   10884 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:20:34.130386   10884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:20:34.130386   10884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.112.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-144100 NodeName:default-k8s-diff-port-144100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.112.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.112.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:20:34.130386   10884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.112.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-144100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.112.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.112.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
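
The generated config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12, and kubeadm expects these two ranges not to overlap. A minimal stdlib sketch of that sanity check, illustrative only (minikube and kubeadm perform their own validation elsewhere):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // overlaps reports whether two CIDR strings share any addresses.
    func overlaps(a, b string) (bool, error) {
        pa, err := netip.ParsePrefix(a)
        if err != nil {
            return false, err
        }
        pb, err := netip.ParsePrefix(b)
        if err != nil {
            return false, err
        }
        return pa.Overlaps(pb), nil
    }

    func main() {
        // Values taken from the kubeadm config above.
        clash, err := overlaps("10.244.0.0/16", "10.96.0.0/12")
        if err != nil {
            panic(err)
        }
        fmt.Println("pod/service CIDRs overlap:", clash) // false for these defaults
    }
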
	
	I1210 07:20:34.135375   10884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:20:34.150948   10884 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:20:34.155322   10884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:20:34.174141   10884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 07:20:34.197237   10884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:20:34.221602   10884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1210 07:20:34.247236   10884 ssh_runner.go:195] Run: grep 192.168.112.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:20:34.254523   10884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.112.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:20:34.274950   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:20:34.415792   10884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:20:34.438262   10884 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100 for IP: 192.168.112.2
	I1210 07:20:34.438262   10884 certs.go:195] generating shared ca certs ...
	I1210 07:20:34.438262   10884 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:20:34.439152   10884 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:20:34.439456   10884 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:20:34.439690   10884 certs.go:257] generating profile certs ...
	I1210 07:20:34.440300   10884 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\client.key
	I1210 07:20:34.440606   10884 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\apiserver.key.271206e4
	I1210 07:20:34.440875   10884 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\proxy-client.key
	I1210 07:20:34.441758   10884 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:20:34.442029   10884 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:20:34.442104   10884 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:20:34.442307   10884 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:20:34.442478   10884 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:20:34.442649   10884 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:20:34.442649   10884 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:20:34.444044   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:20:34.471021   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:20:34.500646   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:20:34.529724   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:20:34.559267   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 07:20:34.587020   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:20:34.699076   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:20:34.796867   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-diff-port-144100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:20:34.830392   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:20:34.899767   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:20:34.932622   10884 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:20:34.965230   10884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:20:35.011209   10884 ssh_runner.go:195] Run: openssl version
	I1210 07:20:35.025106   10884 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:20:35.043864   10884 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:20:35.063150   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:20:35.071696   10884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:20:35.075946   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:20:35.127693   10884 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:20:35.144587   10884 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:20:35.161308   10884 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:20:35.179380   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:20:35.187977   10884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:20:35.192008   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:20:35.239855   10884 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:20:35.256692   10884 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:20:35.274755   10884 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:20:35.291407   10884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:20:35.300827   10884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:20:35.305234   10884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:20:35.354543   10884 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:20:35.373226   10884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:20:35.385920   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:20:35.439264   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:20:35.494454   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:20:35.545503   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:20:35.621296   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:20:35.821157   10884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
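
Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate remains valid for at least the next 24 hours. The same check in pure Go, as a small illustrative sketch (the path is one of the certs checked in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>` (which exits non-zero in that case).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // One of the certificates checked in the log above.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
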
	I1210 07:20:35.929276   10884 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-144100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-144100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.112.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:20:35.934233   10884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:20:36.013918   10884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:20:36.097956   10884 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:20:36.097956   10884 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:20:36.102654   10884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:20:36.117323   10884 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:20:36.121089   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:36.183518   10884 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-144100" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:20:36.184289   10884 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-144100" cluster setting kubeconfig missing "default-k8s-diff-port-144100" context setting]
	I1210 07:20:36.184375   10884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:20:36.209793   10884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:20:36.226206   10884 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 07:20:36.226206   10884 kubeadm.go:602] duration metric: took 127.7166ms to restartPrimaryControlPlane
	I1210 07:20:36.226206   10884 kubeadm.go:403] duration metric: took 296.9252ms to StartCluster
	I1210 07:20:36.226206   10884 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:20:36.226206   10884 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:20:36.227222   10884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:20:36.228198   10884 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.112.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:20:36.228198   10884 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:20:36.228198   10884 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-144100"
	I1210 07:20:36.228198   10884 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-144100"
	I1210 07:20:36.228198   10884 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-144100"
	I1210 07:20:36.228198   10884 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-144100"
	I1210 07:20:36.229205   10884 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-144100"
	W1210 07:20:36.229205   10884 addons.go:248] addon storage-provisioner should already be in state true
	I1210 07:20:36.229205   10884 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-144100"
	W1210 07:20:36.229205   10884 addons.go:248] addon metrics-server should already be in state true
	I1210 07:20:36.228198   10884 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-144100"
	I1210 07:20:36.229205   10884 host.go:66] Checking if "default-k8s-diff-port-144100" exists ...
	I1210 07:20:36.229205   10884 config.go:182] Loaded profile config "default-k8s-diff-port-144100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:20:36.229205   10884 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-144100"
	W1210 07:20:36.229205   10884 addons.go:248] addon dashboard should already be in state true
	I1210 07:20:36.229205   10884 host.go:66] Checking if "default-k8s-diff-port-144100" exists ...
	I1210 07:20:36.229205   10884 host.go:66] Checking if "default-k8s-diff-port-144100" exists ...
	I1210 07:20:36.240204   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:36.240204   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:36.240204   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:36.241197   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:36.243214   10884 out.go:179] * Verifying Kubernetes components...
	I1210 07:20:36.255211   10884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:20:36.304220   10884 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-144100"
	W1210 07:20:36.304220   10884 addons.go:248] addon default-storageclass should already be in state true
	I1210 07:20:36.304220   10884 host.go:66] Checking if "default-k8s-diff-port-144100" exists ...
	I1210 07:20:36.305211   10884 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 07:20:36.307196   10884 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:20:36.309204   10884 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:20:36.311197   10884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-144100 --format={{.State.Status}}
	I1210 07:20:36.314200   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 07:20:36.314200   10884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 07:20:36.317199   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:36.328206   10884 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:20:36.328206   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:20:36.331212   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:36.331212   10884 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:20:36.338212   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:20:36.338212   10884 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:20:36.341242   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:36.375205   10884 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:20:36.375205   10884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:20:36.376225   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:36.378210   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:36.388216   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:36.397203   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:36.436198   10884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56463 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-diff-port-144100\id_rsa Username:docker}
	I1210 07:20:36.803124   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 07:20:36.803186   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 07:20:36.807672   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:20:36.807672   10884 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:20:36.904057   10884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:20:37.000626   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 07:20:37.000626   10884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 07:20:37.001966   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:20:37.001966   10884 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:20:37.014144   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:20:37.017151   10884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-144100
	I1210 07:20:37.068147   10884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-144100" to be "Ready" ...
	I1210 07:20:37.110736   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:20:37.110736   10884 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:20:37.110736   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:20:37.196770   10884 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 07:20:37.196899   10884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 07:20:37.301163   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:20:37.301163   10884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1210 07:20:37.408783   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 07:20:37.508151   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:20:37.508151   10884 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:20:37.701274   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:20:37.701319   10884 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1210 07:20:37.709432   10884 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:20:37.709496   10884 retry.go:31] will retry after 348.329972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:20:37.808436   10884 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:20:37.808436   10884 retry.go:31] will retry after 231.961026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:20:37.895039   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:20:37.895039   10884 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:20:37.996757   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:20:37.996958   10884 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W1210 07:20:38.019857   10884 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:20:38.019857   10884 retry.go:31] will retry after 226.907653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:20:38.043897   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:20:38.061849   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:20:38.096550   10884 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:20:38.096550   10884 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:20:38.126605   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:20:38.251829   10884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
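
The apply failures above are minikube's usual recovery path: the first kubectl apply fails with connection refused because the apiserver on port 8444 is not accepting connections yet, retry.go waits a few hundred milliseconds, and the apply is eventually reissued with --force. A generic sketch of that retry-with-backoff idea (the doubling schedule and the simulated error are assumptions, not minikube's exact policy):

    package main

    import (
        "fmt"
        "time"
    )

    // retry runs fn up to attempts times, sleeping delay between tries and
    // roughly doubling it each time.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 { // simulate the apiserver still coming up
                return fmt.Errorf("dial tcp [::1]:8444: connect: connection refused")
            }
            return nil
        })
        fmt.Println("final error:", err) // <nil> once the third attempt succeeds
    }
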
	I1210 07:20:41.406861   10884 node_ready.go:49] node "default-k8s-diff-port-144100" is "Ready"
	I1210 07:20:41.406931   10884 node_ready.go:38] duration metric: took 4.338717s for node "default-k8s-diff-port-144100" to be "Ready" ...
	I1210 07:20:41.406931   10884 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:20:41.410905   10884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
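
The final line shows minikube polling for the kube-apiserver process with pgrep. Shelling out to the same check from Go might look like the sketch below; the pattern string is copied from the log, the rest is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // -x: exact match, -n: newest match only, -f: match the full command line.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            // pgrep exits non-zero when nothing matches.
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Printf("apiserver pid: %s", out)
    }
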
	
	
	==> Docker <==
	Dec 10 07:08:12 kubernetes-upgrade-458400 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 10 07:08:12 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:12.262512271Z" level=info msg="Starting up"
	Dec 10 07:08:12 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:12.285465061Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 10 07:08:12 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:12.285692482Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 10 07:08:12 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:12.285774490Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 10 07:08:12 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:12.302020440Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 10 07:08:14 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:14.088646949Z" level=info msg="Loading containers: start."
	Dec 10 07:08:14 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:14.092845949Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.307370148Z" level=info msg="Restoring containers: start."
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.424964351Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.474681099Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.809310003Z" level=info msg="Loading containers: done."
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835083068Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835179578Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835191979Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835200380Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835206080Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835229683Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.835303390Z" level=info msg="Initializing buildkit"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.953188522Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.968632659Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.968925889Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.968945891Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:08:18 kubernetes-upgrade-458400 dockerd[1447]: time="2025-12-10T07:08:18.968982494Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:08:18 kubernetes-upgrade-458400 systemd[1]: Started docker.service - Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.728386] CPU: 0 PID: 401770 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f2ea8465b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f2ea8465af6.
	[  +0.000001] RSP: 002b:00007ffd4ed459c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.926797] CPU: 3 PID: 401933 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe9f97eab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fe9f97eaaf6.
	[  +0.000001] RSP: 002b:00007ffde346bd60 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +2.757478] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:20:43 up  2:48,  0 user,  load average: 3.18, 5.27, 4.44
	Linux kubernetes-upgrade-458400 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:20:40 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:41 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 10 07:20:41 kubernetes-upgrade-458400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:41 kubernetes-upgrade-458400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:41 kubernetes-upgrade-458400 kubelet[25808]: E1210 07:20:41.469194   25808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:41 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:41 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:42 kubernetes-upgrade-458400 kubelet[25820]: E1210 07:20:42.241756   25820 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 333.
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:42 kubernetes-upgrade-458400 kubelet[25847]: E1210 07:20:42.983225   25847 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:42 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:20:43 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 334.
	Dec 10 07:20:43 kubernetes-upgrade-458400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:43 kubernetes-upgrade-458400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:20:43 kubernetes-upgrade-458400 kubelet[25941]: E1210 07:20:43.739987   25941 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:20:43 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:20:43 kubernetes-upgrade-458400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
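
The crash loop above appears to be the proximate cause of this failure: kubelet v1.35.0-rc.1 refuses to start on a host using cgroup v1, this WSL2 node runs a cgroup v1 hierarchy (the Docker daemon log above also warns that cgroup v1 is deprecated), and systemd restarts the unit indefinitely (the counter reaches 334 here). One common way to detect the cgroup version from Go is sketched below; the heuristic is a widespread convention, not taken from minikube:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On a unified (v2) hierarchy, /sys/fs/cgroup/cgroup.controllers exists;
        // legacy v1 hierarchies do not expose this file.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 (or hybrid) hierarchy")
        }
    }
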
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-458400 -n kubernetes-upgrade-458400
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-458400 -n kubernetes-upgrade-458400: exit status 2 (612.3652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-458400" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-458400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-458400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-458400: (2.9968098s)
--- FAIL: TestKubernetesUpgrade (833.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (543.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 109 (9m0.3734207s)

                                                
                                                
-- stdout --
	* [no-preload-099700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "no-preload-099700" primary control-plane node in "no-preload-099700" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:17:07.610059   11224 out.go:360] Setting OutFile to fd 1396 ...
	I1210 07:17:07.665050   11224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:07.665050   11224 out.go:374] Setting ErrFile to fd 1600...
	I1210 07:17:07.665050   11224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:17:07.681048   11224 out.go:368] Setting JSON to false
	I1210 07:17:07.684056   11224 start.go:133] hostinfo: {"hostname":"minikube4","uptime":9959,"bootTime":1765341068,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:17:07.684056   11224 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:17:07.690057   11224 out.go:179] * [no-preload-099700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:17:07.692050   11224 notify.go:221] Checking for updates...
	I1210 07:17:07.695045   11224 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:17:07.702046   11224 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:17:07.704054   11224 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:17:07.708052   11224 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:17:07.713789   11224 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:17:07.717349   11224 config.go:182] Loaded profile config "embed-certs-757000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:17:07.717349   11224 config.go:182] Loaded profile config "kubernetes-upgrade-458400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:17:07.717961   11224 config.go:182] Loaded profile config "old-k8s-version-412400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1210 07:17:07.717961   11224 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:17:07.836853   11224 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:17:07.840893   11224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:17:08.125955   11224 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:17:08.102821043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:17:08.129227   11224 out.go:179] * Using the docker driver based on user configuration
	I1210 07:17:08.130565   11224 start.go:309] selected driver: docker
	I1210 07:17:08.130565   11224 start.go:927] validating driver "docker" against <nil>
	I1210 07:17:08.130565   11224 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:17:08.176525   11224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:17:08.414010   11224 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:17:08.392710414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:17:08.414010   11224 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:17:08.415010   11224 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:17:08.421595   11224 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:17:08.423691   11224 cni.go:84] Creating CNI manager for ""
	I1210 07:17:08.423691   11224 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:17:08.423691   11224 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:17:08.423691   11224 start.go:353] cluster config:
	{Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
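(Annotation: the block above is the full ClusterConfig that start.go generated from the test's flags for the no-preload-099700 profile: docker driver, 2 CPUs, 3072MB, Kubernetes v1.35.0-rc.1. It is persisted verbatim as JSON; the WriteFile line a few entries below takes a named lock with a 500ms retry delay and 1m timeout before touching the profile's config.json. A minimal sketch of that persistence step, using a trimmed stand-in struct rather than minikube's real ClusterConfig type, and omitting the lock:

    package main

    import (
    	"encoding/json"
    	"os"
    )

    // Trimmed stand-in for minikube's ClusterConfig; field names mirror the
    // dump above. The real struct lives in minikube's config package.
    type ClusterConfig struct {
    	Name              string
    	Driver            string
    	Memory            int
    	CPUs              int
    	KubernetesVersion string // nested under KubernetesConfig in the real struct
    }

    func main() {
    	cfg := ClusterConfig{
    		Name:              "no-preload-099700",
    		Driver:            "docker",
    		Memory:            3072,
    		CPUs:              2,
    		KubernetesVersion: "v1.35.0-rc.1",
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	// The real target is .minikube\profiles\no-preload-099700\config.json,
    	// written only while holding the named lock shown in the log.
    	if err := os.WriteFile("config.json", out, 0o644); err != nil {
    		panic(err)
    	}
    }
)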
	I1210 07:17:08.431395   11224 out.go:179] * Starting "no-preload-099700" primary control-plane node in "no-preload-099700" cluster
	I1210 07:17:08.434392   11224 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:17:08.439085   11224 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:17:08.443494   11224 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:17:08.443494   11224 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:17:08.443683   11224 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\config.json ...
	I1210 07:17:08.443719   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:17:08.443769   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1
	I1210 07:17:08.443769   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1
	I1210 07:17:08.443819   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\config.json: {Name:mk071d7049b399ea6ae1a7a2ade14f31f4792567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:08.443819   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:17:08.443819   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:17:08.444009   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1
	I1210 07:17:08.444093   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1
	I1210 07:17:08.443920   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
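(Annotation: the "windows sanitize" lines show why cached image tarballs end up named kube-apiserver_v1.35.0-rc.1 rather than kube-apiserver:v1.35.0-rc.1: NTFS reserves ':' outside the drive designator, so every colon in the image reference is rewritten to '_'. A minimal sketch of the rewrite; sanitizeCachePath is a hypothetical name, minikube's version lives in its localpath package:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // sanitizeCachePath (hypothetical name) rewrites the image-derived part of
    // a cache path so it is legal on NTFS: every ':' in the ref becomes '_'.
    // The drive colon ("C:") is untouched because only the ref is rewritten.
    func sanitizeCachePath(cacheDir, imageRef string) string {
    	safe := strings.ReplaceAll(imageRef, ":", "_")
    	return filepath.Join(cacheDir, filepath.FromSlash(safe))
    }

    func main() {
    	dir := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64`
    	fmt.Println(sanitizeCachePath(dir, "registry.k8s.io/kube-apiserver:v1.35.0-rc.1"))
    	// On Windows: C:\...\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1
    }
)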
	I1210 07:17:08.754308   11224 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:17:08.754308   11224 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:17:08.754308   11224 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:17:08.754308   11224 start.go:360] acquireMachinesLock for no-preload-099700: {Name:mkc8e995140dc54401ffafd9be7c06a8281abfd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:08.754308   11224 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-099700"
	I1210 07:17:08.754308   11224 start.go:93] Provisioning new machine with config: &{Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:17:08.755307   11224 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:17:08.761789   11224 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:17:08.762462   11224 start.go:159] libmachine.API.Create for "no-preload-099700" (driver="docker")
	I1210 07:17:08.762462   11224 client.go:173] LocalClient.Create starting
	I1210 07:17:08.763098   11224 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:17:08.763098   11224 main.go:143] libmachine: Decoding PEM data...
	I1210 07:17:08.763098   11224 main.go:143] libmachine: Parsing certificate...
	I1210 07:17:08.763677   11224 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:17:08.763758   11224 main.go:143] libmachine: Decoding PEM data...
	I1210 07:17:08.763758   11224 main.go:143] libmachine: Parsing certificate...
	I1210 07:17:08.769684   11224 cli_runner.go:164] Run: docker network inspect no-preload-099700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:17:08.848103   11224 cli_runner.go:211] docker network inspect no-preload-099700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:17:08.853115   11224 network_create.go:284] running [docker network inspect no-preload-099700] to gather additional debugging logs...
	I1210 07:17:08.853115   11224 cli_runner.go:164] Run: docker network inspect no-preload-099700
	W1210 07:17:08.955601   11224 cli_runner.go:211] docker network inspect no-preload-099700 returned with exit code 1
	I1210 07:17:08.955601   11224 network_create.go:287] error running [docker network inspect no-preload-099700]: docker network inspect no-preload-099700: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-099700 not found
	I1210 07:17:08.955601   11224 network_create.go:289] output of [docker network inspect no-preload-099700]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-099700 not found
	
	** /stderr **
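(Annotation: the exit-status-1 "network not found" above is the expected probe result on a fresh profile. minikube inspects the would-be network first and treats "not found" as the signal to create it, re-running the bare inspect only to capture the debugging output shown. A rough equivalent of that probe; the docker CLI invocation is real, but the error-string match is an assumption rather than minikube's exact check:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // networkExists probes for a docker network the way the log above does:
    // run `docker network inspect` and treat a "not found" failure as absence.
    func networkExists(name string) (bool, error) {
    	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    	if err == nil {
    		return true, nil
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && strings.Contains(string(out), "not found") {
    		return false, nil // absent: fall through to `docker network create`
    	}
    	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
    }

    func main() {
    	ok, err := networkExists("no-preload-099700")
    	fmt.Println(ok, err)
    }
)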
	I1210 07:17:08.970596   11224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:17:09.927838   11224 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:17:10.090649   11224 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:17:10.309001   11224 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e0b650}
	I1210 07:17:10.309001   11224 network_create.go:124] attempt to create docker network no-preload-099700 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:17:10.315295   11224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700
	W1210 07:17:10.512018   11224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700 returned with exit code 1
	W1210 07:17:10.512018   11224 network_create.go:149] failed to create docker network no-preload-099700 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:17:10.512018   11224 network_create.go:116] failed to create docker network no-preload-099700 192.168.67.0/24, will retry: subnet is taken
	I1210 07:17:10.591265   11224 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:17:10.638875   11224 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed9b90}
	I1210 07:17:10.639056   11224 network_create.go:124] attempt to create docker network no-preload-099700 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:17:10.644475   11224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700
	W1210 07:17:11.000640   11224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700 returned with exit code 1
	W1210 07:17:11.008137   11224 network_create.go:149] failed to create docker network no-preload-099700 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:17:11.008137   11224 network_create.go:116] failed to create docker network no-preload-099700 192.168.76.0/24, will retry: subnet is taken
	I1210 07:17:11.063835   11224 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:17:11.103153   11224 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e58450}
	I1210 07:17:11.103153   11224 network_create.go:124] attempt to create docker network no-preload-099700 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:17:11.108994   11224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700
	I1210 07:17:11.282425   11224 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.282425   11224 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:17:11.282425   11224 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.8383725s
	I1210 07:17:11.283004   11224 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	W1210 07:17:11.305191   11224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700 returned with exit code 1
	W1210 07:17:11.305742   11224 network_create.go:149] failed to create docker network no-preload-099700 192.168.85.0/24 with gateway 192.168.85.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:17:11.305784   11224 network_create.go:116] failed to create docker network no-preload-099700 192.168.85.0/24, will retry: subnet is taken
	I1210 07:17:11.333190   11224 cache.go:107] acquiring lock: {Name:mkbb0c8fa4da62a80ed9d6679bee657142469def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.335372   11224 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:11.338995   11224 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.340660   11224 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:17:11.345469   11224 cache.go:107] acquiring lock: {Name:mk732492e3e0368b966de7b10f5eb5a7a6586537 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.347468   11224 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:11.348483   11224 cache.go:107] acquiring lock: {Name:mk16b9d3dcf33fab9768fe75991ea4fd479f5b62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.348483   11224 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:17:11.356496   11224 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:11.360465   11224 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:17:11.363485   11224 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:17:11.366476   11224 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:17:11.374484   11224 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:11.378471   11224 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.379482   11224 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:17:11.379482   11224 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 2.9355161s
	I1210 07:17:11.379482   11224 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:17:11.392480   11224 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001deeb70}
	I1210 07:17:11.392480   11224 network_create.go:124] attempt to create docker network no-preload-099700 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1210 07:17:11.394468   11224 cache.go:107] acquiring lock: {Name:mkcf25f639af7f4007c4b4fab61572d5959a6d86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.395473   11224 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:17:11.398493   11224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700
	I1210 07:17:11.406477   11224 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	W1210 07:17:11.438482   11224 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:11.451469   11224 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:17:11.451469   11224 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:17:11.451469   11224 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.0077039s
	I1210 07:17:11.451469   11224 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	W1210 07:17:11.464470   11224 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700 returned with exit code 1
	W1210 07:17:11.464470   11224 network_create.go:149] failed to create docker network no-preload-099700 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:17:11.464470   11224 network_create.go:116] failed to create docker network no-preload-099700 192.168.94.0/24, will retry: subnet is taken
	I1210 07:17:11.494475   11224 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	W1210 07:17:11.501469   11224 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:11.513478   11224 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f27b60}
	I1210 07:17:11.513478   11224 network_create.go:124] attempt to create docker network no-preload-099700 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1210 07:17:11.517473   11224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-099700 no-preload-099700
	W1210 07:17:11.559470   11224 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:17:11.625478   11224 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:11.687471   11224 network_create.go:108] docker network no-preload-099700 192.168.103.0/24 created
	I1210 07:17:11.687471   11224 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-099700" container
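(Annotation: the retry trail above, 49 to 58 to 67 to 76 to 85 to 94 to 103, is minikube's free-subnet scan: candidate /24 blocks are stepped by 9 in the third octet, each one is handed to `docker network create`, and a "Pool overlaps with other one on this address space" rejection moves the scan to the next block. Once a create succeeds, the node gets gateway+1 as its static IP, here 192.168.103.2. A condensed sketch of that loop; the step size and starting block match the log, but the shell-out is simplified relative to network_create.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // createNetwork scans 192.168.49.0/24, 192.168.58.0/24, ... (third octet
    // stepped by 9, as in the log above), asking docker to create each
    // candidate until one does not overlap an existing address pool.
    func createNetwork(name string) (string, string, error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
    			name).CombinedOutput()
    		if err == nil {
    			// First usable address (gateway+1) becomes the node's static IP.
    			return subnet, fmt.Sprintf("192.168.%d.2", octet), nil
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			continue // subnet taken by another network; try the next block
    		}
    		return "", "", fmt.Errorf("network create: %v: %s", err, out)
    	}
    	return "", "", fmt.Errorf("no free /24 found for %s", name)
    }

    func main() {
    	fmt.Println(createNetwork("no-preload-099700"))
    }
)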
	W1210 07:17:11.690483   11224 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
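(Annotation: the repeated "authn lookup ... (trying anon)" warnings above are the docker credential helper failing under the CI service session; "A specified logon session does not exist" is what the Windows credential store typically returns when there is no interactive logon. minikube then retries each lookup anonymously, which is harmless here because registry.k8s.io serves public images. A sketch of that anonymous fallback using go-containerregistry, the library family minikube's image handling is built on; the exact option set is an assumption:

    package main

    import (
    	"fmt"

    	"github.com/google/go-containerregistry/pkg/authn"
    	"github.com/google/go-containerregistry/pkg/name"
    	"github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func main() {
    	ref, err := name.ParseReference("registry.k8s.io/kube-scheduler:v1.35.0-rc.1")
    	if err != nil {
    		panic(err)
    	}
    	// Skip the credential helper entirely: public registries accept
    	// anonymous pulls, which is what the "trying anon" retry falls back to.
    	img, err := remote.Image(ref, remote.WithAuth(authn.Anonymous))
    	if err != nil {
    		panic(err)
    	}
    	digest, err := img.Digest()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(digest)
    }
)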
	I1210 07:17:11.701488   11224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:17:11.766480   11224 cli_runner.go:164] Run: docker volume create no-preload-099700 --label name.minikube.sigs.k8s.io=no-preload-099700 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:17:11.825476   11224 oci.go:103] Successfully created a docker volume no-preload-099700
	I1210 07:17:11.828475   11224 cli_runner.go:164] Run: docker run --rm --name no-preload-099700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-099700 --entrypoint /usr/bin/test -v no-preload-099700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:17:11.833477   11224 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1
	I1210 07:17:11.834474   11224 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1210 07:17:11.838499   11224 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1
	I1210 07:17:11.889474   11224 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1
	I1210 07:17:11.894488   11224 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1
	I1210 07:17:12.846195   11224 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1 exists
	I1210 07:17:12.846843   11224 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-rc.1" took 4.4025629s
	I1210 07:17:12.846895   11224 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:17:13.259500   11224 cli_runner.go:217] Completed: docker run --rm --name no-preload-099700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-099700 --entrypoint /usr/bin/test -v no-preload-099700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.4310032s)
	I1210 07:17:13.259500   11224 oci.go:107] Successfully prepared a docker volume no-preload-099700
	I1210 07:17:13.259500   11224 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:17:13.263508   11224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:17:13.490516   11224 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:17:13.490516   11224 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-rc.1" took 5.0462891s
	I1210 07:17:13.490516   11224 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:17:13.535507   11224 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:17:13.514353332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:17:13.540511   11224 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:17:13.626117   11224 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1210 07:17:13.627115   11224 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.1828171s
	I1210 07:17:13.627115   11224 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1210 07:17:13.672118   11224 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:17:13.672118   11224 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-rc.1" took 5.2282683s
	I1210 07:17:13.673137   11224 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:17:13.727139   11224 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:17:13.727139   11224 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-rc.1" took 5.2832884s
	I1210 07:17:13.727139   11224 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:17:13.727139   11224 cache.go:87] Successfully saved all images to host disk.
	I1210 07:17:13.841122   11224 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-099700 --name no-preload-099700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-099700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-099700 --network no-preload-099700 --ip 192.168.103.2 --volume no-preload-099700:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:17:16.637672   11224 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-099700 --name no-preload-099700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-099700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-099700 --network no-preload-099700 --ip 192.168.103.2 --volume no-preload-099700:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f: (2.7965063s)
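(Annotation: the `docker run` above is the KIC node container itself: privileged with seccomp and apparmor unconfined so systemd and the kubelet can run inside, tmpfs on /tmp and /run, the profile volume on /var, the memory and CPU caps from the cluster config, and each guest port (22, 2376, 5000, 8443, 32443) published as `127.0.0.1::<port>`, meaning loopback-only with a random ephemeral host port. The `container inspect` template used later in this log extracts those mappings; `docker port` returns the same data, e.g. as in this sketch, which assumes a single-line IPv4 mapping:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPortFor asks docker which random host port was bound for a guest
    // port published as 127.0.0.1::<guest>.
    func hostPortFor(container, guestPort string) (string, error) {
    	out, err := exec.Command("docker", "port", container, guestPort).Output()
    	if err != nil {
    		return "", err
    	}
    	// Output looks like "127.0.0.1:56157"; keep what follows the last colon.
    	mapping := strings.TrimSpace(string(out))
    	return mapping[strings.LastIndex(mapping, ":")+1:], nil
    }

    func main() {
    	port, err := hostPortFor("no-preload-099700", "22")
    	fmt.Println(port, err)
    }
)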
	I1210 07:17:16.642665   11224 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Running}}
	I1210 07:17:16.718679   11224 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:17:16.796679   11224 cli_runner.go:164] Run: docker exec no-preload-099700 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:17:16.943669   11224 oci.go:144] the created container "no-preload-099700" has a running status.
	I1210 07:17:16.943669   11224 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa...
	I1210 07:17:17.100849   11224 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:17:17.195857   11224 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:17:17.270870   11224 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:17:17.270870   11224 kic_runner.go:114] Args: [docker exec --privileged no-preload-099700 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:17:17.415879   11224 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa...
	I1210 07:17:19.917094   11224 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:17:19.974353   11224 machine.go:94] provisionDockerMachine start ...
	I1210 07:17:19.978371   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:20.035356   11224 main.go:143] libmachine: Using SSH client type: native
	I1210 07:17:20.049331   11224 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56157 <nil> <nil>}
	I1210 07:17:20.049331   11224 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:17:20.237790   11224 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-099700
	
	I1210 07:17:20.237790   11224 ubuntu.go:182] provisioning hostname "no-preload-099700"
	I1210 07:17:20.241673   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:20.306988   11224 main.go:143] libmachine: Using SSH client type: native
	I1210 07:17:20.307988   11224 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56157 <nil> <nil>}
	I1210 07:17:20.307988   11224 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-099700 && echo "no-preload-099700" | sudo tee /etc/hostname
	I1210 07:17:20.514859   11224 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-099700
	
	I1210 07:17:20.519602   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:20.590351   11224 main.go:143] libmachine: Using SSH client type: native
	I1210 07:17:20.590351   11224 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56157 <nil> <nil>}
	I1210 07:17:20.590351   11224 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-099700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-099700/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-099700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:17:20.787100   11224 main.go:143] libmachine: SSH cmd err, output: <nil>: 
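(Annotation: every provisioning command above, hostname, /etc/hosts, and the systemd edits that follow, runs over SSH to 127.0.0.1:56157, the random host port mapped to the container's 22/tcp, authenticating as user "docker" with the id_rsa key minikube generated and installed into /home/docker/.ssh/authorized_keys earlier in this log. A minimal equivalent with golang.org/x/crypto/ssh; port, user, and key path are taken from this log, and host-key checking is skipped because the endpoint is loopback-only:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa`
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-only endpoint
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:56157", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.Output("hostname") // same first command as the log
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }
)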
	I1210 07:17:20.787100   11224 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:17:20.787100   11224 ubuntu.go:190] setting up certificates
	I1210 07:17:20.787100   11224 provision.go:84] configureAuth start
	I1210 07:17:20.791107   11224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-099700
	I1210 07:17:20.858875   11224 provision.go:143] copyHostCerts
	I1210 07:17:20.858875   11224 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:17:20.858875   11224 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:17:20.859883   11224 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:17:20.860880   11224 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:17:20.860880   11224 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:17:20.860880   11224 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:17:20.862875   11224 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:17:20.862875   11224 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:17:20.862875   11224 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:17:20.863872   11224 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-099700 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-099700]
	I1210 07:17:21.082880   11224 provision.go:177] copyRemoteCerts
	I1210 07:17:21.088871   11224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:17:21.093871   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:21.149868   11224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56157 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:17:21.275265   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:17:21.303888   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:17:21.347386   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
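(Annotation: configureAuth generates a server certificate signed by the profile CA with SANs for every name the daemon may be reached by (127.0.0.1, the node IP 192.168.103.2, localhost, minikube, no-preload-099700), then scp's ca.pem, server.pem, and server-key.pem into /etc/docker so dockerd can enforce --tlsverify, visible in the unit file written below. A compact sketch of CA-signed server-cert issuance with those SANs, using only crypto/x509; key sizes, serial handling, and error checks are simplified:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
    		IsCA:                  true,
    		BasicConstraintsValid: true,
    		KeyUsage:              x509.KeyUsageCertSign,
    	}
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-099700"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list matches the provision.go line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
    		DNSNames:    []string{"localhost", "minikube", "no-preload-099700"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
)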
	I1210 07:17:21.378353   11224 provision.go:87] duration metric: took 591.2438ms to configureAuth
	I1210 07:17:21.378353   11224 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:17:21.379351   11224 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:17:21.382346   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:21.448678   11224 main.go:143] libmachine: Using SSH client type: native
	I1210 07:17:21.448678   11224 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56157 <nil> <nil>}
	I1210 07:17:21.448678   11224 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:17:21.641470   11224 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:17:21.641998   11224 ubuntu.go:71] root file system type: overlay
	I1210 07:17:21.642232   11224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:17:21.646594   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:21.702939   11224 main.go:143] libmachine: Using SSH client type: native
	I1210 07:17:21.702939   11224 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56157 <nil> <nil>}
	I1210 07:17:21.702939   11224 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:17:21.919022   11224 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:17:21.924944   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:21.982956   11224 main.go:143] libmachine: Using SSH client type: native
	I1210 07:17:21.982956   11224 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56157 <nil> <nil>}
	I1210 07:17:21.982956   11224 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:17:23.369563   11224 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:17:21.910505571 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1210 07:17:23.369563   11224 machine.go:97] duration metric: took 3.3951577s to provisionDockerMachine
	I1210 07:17:23.369563   11224 client.go:176] duration metric: took 14.6068762s to LocalClient.Create
	I1210 07:17:23.369563   11224 start.go:167] duration metric: took 14.6068762s to libmachine.API.Create "no-preload-099700"
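The `sudo diff -u ... || { mv ...; systemctl restart docker; }` step above is an idempotence guard: the freshly rendered unit is written to docker.service.new, compared against the installed file, and Docker is only reloaded and restarted when the two differ. A minimal standalone sketch of that compare-then-replace pattern in Go (a hypothetical local helper, not minikube's actual code, which runs the equivalent over SSH with sudo):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// replaceIfChanged writes newContent to path only when it differs from the
// current contents, and reports whether a change (and thus a service
// restart) is needed.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err != nil && !errors.Is(err, fs.ErrNotExist) {
		return false, err
	}
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: skip the restart
	}
	// Write to a temp name first, then rename, mirroring docker.service.new.
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}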
	I1210 07:17:23.369563   11224 start.go:293] postStartSetup for "no-preload-099700" (driver="docker")
	I1210 07:17:23.370697   11224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:17:23.377366   11224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:17:23.380647   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:23.440000   11224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56157 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:17:23.583822   11224 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:17:23.593698   11224 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:17:23.593732   11224 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:17:23.593783   11224 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:17:23.594130   11224 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:17:23.594668   11224 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:17:23.600045   11224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:17:23.614430   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:17:23.651627   11224 start.go:296] duration metric: took 282.0588ms for postStartSetup
	I1210 07:17:23.657617   11224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-099700
	I1210 07:17:23.711608   11224 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\config.json ...
	I1210 07:17:23.720615   11224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:17:23.724610   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:23.780610   11224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56157 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:17:23.907816   11224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:17:23.922001   11224 start.go:128] duration metric: took 15.1664607s to createHost
	I1210 07:17:23.922001   11224 start.go:83] releasing machines lock for "no-preload-099700", held for 15.1674601s
	I1210 07:17:23.927645   11224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-099700
	I1210 07:17:23.984345   11224 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:17:23.987341   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:23.988340   11224 ssh_runner.go:195] Run: cat /version.json
	I1210 07:17:23.991343   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:24.047349   11224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56157 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:17:24.049347   11224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56157 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	W1210 07:17:24.175399   11224 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:17:24.180396   11224 ssh_runner.go:195] Run: systemctl --version
	I1210 07:17:24.193403   11224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:17:24.202396   11224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:17:24.205394   11224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:17:24.260083   11224 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:17:24.260083   11224 start.go:496] detecting cgroup driver to use...
	I1210 07:17:24.260083   11224 detect.go:187] detected "cgroupfs" cgroup driver on host os
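The "detected \"cgroupfs\" cgroup driver on host os" line reflects a filesystem probe of the host. One common heuristic (a sketch only, not necessarily minikube's detect.go logic) is to treat the presence of /sys/fs/cgroup/cgroup.controllers as a unified cgroup v2 hierarchy:

package main

import (
	"fmt"
	"os"
)

// cgroupVersion guesses the cgroup hierarchy version: on a unified (v2)
// hierarchy, /sys/fs/cgroup/cgroup.controllers exists; on legacy (v1)
// hierarchies it does not. Heuristic sketch, assumption-labelled.
func cgroupVersion() int {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return 2
	}
	return 1
}

func main() {
	fmt.Println("cgroup v", cgroupVersion())
}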
	I1210 07:17:24.260083   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:17:24.276454   11224 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:17:24.276454   11224 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:17:24.293300   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:17:24.315246   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:17:24.332862   11224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:17:24.339449   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:17:24.362506   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:17:24.382838   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:17:24.401327   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:17:24.425913   11224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:17:24.445917   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:17:24.465909   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:17:24.485913   11224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:17:24.503907   11224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:17:24.521915   11224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:17:24.543914   11224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:17:24.689705   11224 ssh_runner.go:195] Run: sudo systemctl restart containerd
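The chain of `sed -i` calls above rewrites /etc/containerd/config.toml line by line: pinning the sandbox image, forcing SystemdCgroup = false to match the cgroupfs driver, switching runtimes to io.containerd.runc.v2, and pointing conf_dir at /etc/cni/net.d. The same anchored, indentation-preserving substitution can be expressed with Go's regexp package; this is an illustrative equivalent, not the code minikube runs:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true`
	// (?m) makes ^ and $ match per line; ${1} keeps the original indentation,
	// mirroring sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}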
	I1210 07:17:24.851404   11224 start.go:496] detecting cgroup driver to use...
	I1210 07:17:24.851404   11224 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:17:24.855405   11224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:17:24.888159   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:17:24.911134   11224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:17:24.963758   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:17:24.990006   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:17:25.008750   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:17:25.036952   11224 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:17:25.051216   11224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:17:25.065510   11224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:17:25.094558   11224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:17:25.268776   11224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:17:25.414416   11224 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:17:25.414948   11224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:17:25.448341   11224 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:17:25.474598   11224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:17:25.640791   11224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:17:27.859361   11224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.218424s)
	I1210 07:17:27.863664   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:17:27.887409   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:17:27.911807   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:17:27.942769   11224 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:17:28.101063   11224 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:17:28.264212   11224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:17:28.405475   11224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:17:28.434224   11224 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:17:28.459967   11224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:17:28.680677   11224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:17:28.794779   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:17:28.817484   11224 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:17:28.825700   11224 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:17:28.837948   11224 start.go:564] Will wait 60s for crictl version
	I1210 07:17:28.842191   11224 ssh_runner.go:195] Run: which crictl
	I1210 07:17:28.855567   11224 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:17:28.894825   11224 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
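"Will wait 60s for socket path /var/run/cri-dockerd.sock" followed by a stat is a poll-until-exists loop: keep checking the path until it appears or the deadline passes. A self-contained sketch of that pattern (helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses, the same
// shape as the wait for /var/run/cri-dockerd.sock in the log above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}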
	I1210 07:17:28.898331   11224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:17:28.944521   11224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:17:28.987106   11224 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 07:17:28.991540   11224 cli_runner.go:164] Run: docker exec -t no-preload-099700 dig +short host.docker.internal
	I1210 07:17:29.139621   11224 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:17:29.143618   11224 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:17:29.150986   11224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
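The bash one-liner above implements "replace any existing host.minikube.internal entry, then append a fresh one": it filters /etc/hosts through grep -v, appends the new mapping, and installs the result via a temp file. An equivalent sketch in Go (hypothetical helper operating on the file contents as a string):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line already ending in "\t<name>" and appends a
// fresh "ip\tname" entry, mirroring the grep -v / echo pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.65.254", "host.minikube.internal"))
}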
	I1210 07:17:29.171686   11224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:17:29.228529   11224 kubeadm.go:884] updating cluster {Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:17:29.229522   11224 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:17:29.232533   11224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:17:29.262327   11224 docker.go:691] Got preloaded images: 
	I1210 07:17:29.262327   11224 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-rc.1 wasn't preloaded
	I1210 07:17:29.262327   11224 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:17:29.271933   11224 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:17:29.275942   11224 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:17:29.279939   11224 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:29.279939   11224 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:17:29.283940   11224 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:17:29.284936   11224 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:17:29.287933   11224 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:29.288928   11224 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:17:29.292952   11224 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:29.292952   11224 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:17:29.296962   11224 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:17:29.297946   11224 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:17:29.301931   11224 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:17:29.302959   11224 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:29.305942   11224 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:17:29.310930   11224 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1210 07:17:29.343936   11224 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:17:29.391872   11224 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:17:29.452780   11224 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:17:29.502018   11224 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:17:29.564763   11224 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:29.596561   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	W1210 07:17:29.619583   11224 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:29.630569   11224 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1210 07:17:29.630569   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1
	I1210 07:17:29.630569   11224 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:17:29.634583   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:17:29.656596   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:29.672570   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1
	I1210 07:17:29.677570   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	W1210 07:17:29.678565   11224 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:29.687564   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1210 07:17:29.687564   11224 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1210 07:17:29.687564   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1
	I1210 07:17:29.687564   11224 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:29.687564   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1210 07:17:29.692573   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:17:29.730583   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	W1210 07:17:29.746586   11224 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:17:29.773708   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:17:29.796447   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1
	I1210 07:17:29.805115   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:17:29.825545   11224 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1210 07:17:29.825601   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1
	I1210 07:17:29.825653   11224 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:17:29.831056   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:17:29.844691   11224 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:17:29.844691   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:17:29.844691   11224 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:17:29.849678   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:17:29.853674   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:29.864697   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1210 07:17:29.864697   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1210 07:17:29.870685   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:17:29.914687   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1
	I1210 07:17:29.920706   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:17:29.929692   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:17:29.936706   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:17:29.940694   11224 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1210 07:17:29.940694   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1
	I1210 07:17:29.941691   11224 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:29.944708   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:17:29.945693   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:17:29.953679   11224 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:17:29.953679   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:17:29.953679   11224 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:17:29.956682   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:17:30.004676   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1210 07:17:30.004676   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1210 07:17:30.030697   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:17:30.031693   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:17:30.063687   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1
	I1210 07:17:30.063687   11224 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1210 07:17:30.063687   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1210 07:17:30.063687   11224 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:17:30.069689   11224 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:17:30.069689   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:17:30.079686   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:17:30.084680   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:17:30.099694   11224 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:17:30.222409   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1210 07:17:30.222409   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:17:30.222409   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1210 07:17:30.222409   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:17:30.222409   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1210 07:17:30.228416   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:17:30.261421   11224 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:17:30.261421   11224 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:17:30.261421   11224 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:17:30.265418   11224 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:17:30.279405   11224 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1210 07:17:30.279405   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 | docker load"
	I1210 07:17:30.363452   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1210 07:17:30.363452   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1210 07:17:30.443435   11224 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:17:30.449439   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:17:33.811078   11224 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.3615872s)
	I1210 07:17:33.811078   11224 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:17:33.812072   11224 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 | docker load": (3.5316189s)
	I1210 07:17:33.812072   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1 from cache
	I1210 07:17:33.812072   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:17:33.812072   11224 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:17:33.812072   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:17:33.968993   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:17:33.969995   11224 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1210 07:17:33.969995   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 | docker load"
	I1210 07:17:38.368861   11224 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 | docker load": (4.3987982s)
	I1210 07:17:38.368861   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1 from cache
	I1210 07:17:38.368861   11224 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:17:38.368861   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:17:42.088909   11224 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (3.7199909s)
	I1210 07:17:42.088909   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:17:42.088909   11224 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1210 07:17:42.088909   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 | docker load"
	I1210 07:17:43.549053   11224 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 | docker load": (1.4601213s)
	I1210 07:17:43.549053   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1 from cache
	I1210 07:17:43.549053   11224 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1210 07:17:43.549053   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 | docker load"
	I1210 07:17:44.598026   11224 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 | docker load": (1.0489563s)
	I1210 07:17:44.598026   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1 from cache
	I1210 07:17:44.598553   11224 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1210 07:17:44.598595   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1210 07:17:46.253989   11224 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.6553688s)
	I1210 07:17:46.253989   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1210 07:17:46.253989   11224 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:17:46.253989   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1210 07:17:46.877357   11224 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:17:46.877357   11224 cache_images.go:125] Successfully loaded all cached images
	I1210 07:17:46.880294   11224 cache_images.go:94] duration metric: took 17.6176959s to LoadCachedImages
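Each image above went through the same cycle: `docker image inspect` to compare the stored ID against the expected hash, `docker rmi` if the image is stale, scp of the cached tarball into /var/lib/minikube/images, then `docker load`. The decision step can be sketched with os/exec (illustrative only; the real code issues these commands over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image must be (re)loaded: true when the
// image is absent or its ID does not match the expected hash, matching the
// "needs transfer ... does not exist at hash" checks in the log above.
// Note docker prints IDs as "sha256:<hex>", hence the Contains check.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.10.1", "cd073f4c5f6a"))
}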
	I1210 07:17:46.880294   11224 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 docker true true} ...
	I1210 07:17:46.880294   11224 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-099700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:17:46.885451   11224 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:17:46.971374   11224 cni.go:84] Creating CNI manager for ""
	I1210 07:17:46.971374   11224 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:17:46.971374   11224 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:17:46.971374   11224 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-099700 NodeName:no-preload-099700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:17:46.971967   11224 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-099700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:17:46.976036   11224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:17:46.989981   11224 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1210 07:17:46.996323   11224 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:17:47.010833   11224 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-rc.1/kubectl
	I1210 07:17:47.010833   11224 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-rc.1/kubelet
	I1210 07:17:47.010833   11224 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-rc.1/kubeadm
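The ?checksum=file:...sha256 suffix on each download URL instructs the downloader to fetch the matching .sha256 file and verify the binary against it before caching. The verification itself is a plain SHA-256 comparison; a minimal sketch (the digest in main is a placeholder):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifySHA256 hashes the file at path and compares it to the expected
// lowercase hex digest, the same check implied by checksum=file:...sha256.
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	fmt.Println(verifySHA256("kubectl", "deadbeef")) // placeholder digest
}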
	I1210 07:17:48.309056   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1210 07:17:48.319394   11224 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1210 07:17:48.319394   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	I1210 07:17:50.380047   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1210 07:17:50.392330   11224 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1210 07:17:50.392964   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1210 07:17:50.667671   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:17:50.735673   11224 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1210 07:17:50.781678   11224 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1210 07:17:50.781678   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1210 07:17:51.535398   11224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:17:51.552953   11224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1210 07:17:51.571620   11224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:17:51.592572   11224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
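The kubeadm.yaml.new shipped above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. Consumers decode it as a document stream; a sketch using gopkg.in/yaml.v3 (an assumed dependency here, reading only the kind of each document):

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc struct {
			Kind string `yaml:"kind"`
		}
		// Decode returns io.EOF once the document stream is exhausted.
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("document kind:", doc.Kind)
	}
}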
	I1210 07:17:51.618127   11224 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:17:51.625647   11224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:17:51.646057   11224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:17:51.781543   11224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:17:51.803553   11224 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700 for IP: 192.168.103.2
	I1210 07:17:51.803553   11224 certs.go:195] generating shared ca certs ...
	I1210 07:17:51.803553   11224 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:51.804122   11224 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:17:51.804551   11224 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:17:51.804598   11224 certs.go:257] generating profile certs ...
	I1210 07:17:51.804598   11224 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.key
	I1210 07:17:51.804598   11224 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.crt with IP's: []
	I1210 07:17:51.855751   11224 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.crt ...
	I1210 07:17:51.855751   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.crt: {Name:mk01a60e1191060cb1c7866974401ee89e2663ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:51.857689   11224 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.key ...
	I1210 07:17:51.857689   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.key: {Name:mkd5b4d1e67e9dd4122f34608f301914c2622bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:51.859945   11224 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key.605fe1d0
	I1210 07:17:51.859945   11224 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt.605fe1d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1210 07:17:51.893591   11224 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt.605fe1d0 ...
	I1210 07:17:51.893591   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt.605fe1d0: {Name:mka6e7fcb5a897269cd346181a776fa4e52e59a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:51.894491   11224 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key.605fe1d0 ...
	I1210 07:17:51.894491   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key.605fe1d0: {Name:mk1b62a6f233ecd54c0a7272a8339394bd86f494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:51.895371   11224 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt.605fe1d0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt
	I1210 07:17:51.922147   11224 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key.605fe1d0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key
	I1210 07:17:51.923013   11224 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.key
	I1210 07:17:51.923168   11224 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.crt with IP's: []
	I1210 07:17:52.240419   11224 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.crt ...
	I1210 07:17:52.240419   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.crt: {Name:mk20700698d219aa58d54fc4822bb651afd39b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:17:52.245419   11224 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.key ...
	I1210 07:17:52.245419   11224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.key: {Name:mkad2dc3cf80a960fe0b32aabdc03df09216e0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
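Each of the profile certs generated above is a serving certificate signed by the shared minikube CA, with the relevant IPs embedded as SANs. A compact sketch of that issuance with Go's crypto/x509 (key size, validity window, and usage bits here are illustrative assumptions; minikube's own crypto.go has its own defaults):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newSignedCert issues a serving certificate signed by caCert/caKey with
	// the given IPs as SANs, the same shape as the apiserver cert generated
	// for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2] above.
	func newSignedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1), // sketch only; real code randomizes this
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return der, key, err // DER bytes; PEM-encode before writing the .crt
	}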
	I1210 07:17:52.258396   11224 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:17:52.259340   11224 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:17:52.259340   11224 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:17:52.259687   11224 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:17:52.259907   11224 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:17:52.260139   11224 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:17:52.260367   11224 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:17:52.261463   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:17:52.291764   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:17:52.320891   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:17:52.346626   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:17:52.378130   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:17:52.410913   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:17:52.442051   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:17:52.468128   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:17:52.500956   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:17:52.533669   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:17:52.568600   11224 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:17:52.597319   11224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:17:52.623485   11224 ssh_runner.go:195] Run: openssl version
	I1210 07:17:52.641060   11224 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:17:52.664060   11224 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:17:52.687113   11224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:17:52.697572   11224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:17:52.700757   11224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:17:52.750355   11224 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:17:52.768708   11224 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:17:52.787240   11224 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:17:52.804693   11224 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:17:52.822244   11224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:17:52.832047   11224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:17:52.836270   11224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:17:52.883611   11224 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:17:52.900140   11224 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:17:52.917673   11224 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:17:52.934234   11224 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:17:52.951829   11224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:17:52.958856   11224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:17:52.962848   11224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:17:53.010717   11224 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:17:53.029621   11224 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
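The test -s / ln -fs / openssl x509 -hash loop above installs each CA into /etc/ssl/certs under both its own name and its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based tools on the node locate trusted roots. A sketch of one iteration in Go, shelling out the same way (installCACert is a hypothetical helper, not minikube's API):

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert links a PEM CA under its OpenSSL subject-hash name so
	// that hash-based lookups in /etc/ssl/certs resolve to it.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("install failed:", err)
		}
	}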
	I1210 07:17:53.050688   11224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:17:53.060295   11224 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:17:53.061069   11224 kubeadm.go:401] StartCluster: {Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:17:53.064782   11224 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:17:53.098486   11224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:17:53.115498   11224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:17:53.132357   11224 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:17:53.136519   11224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:17:53.148607   11224 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:17:53.148607   11224 kubeadm.go:158] found existing configuration files:
	
	I1210 07:17:53.154020   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:17:53.167187   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:17:53.170950   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:17:53.188178   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:17:53.201469   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:17:53.205713   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:17:53.223321   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:17:53.237716   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:17:53.245225   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:17:53.265450   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:17:53.278909   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:17:53.283421   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:17:53.302203   11224 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:17:53.366031   11224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:17:53.366031   11224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:17:53.545164   11224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:17:53.545796   11224 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:17:53.545857   11224 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:17:53.545883   11224 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:17:53.545883   11224 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:17:53.545883   11224 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:17:53.545883   11224 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:17:53.548295   11224 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:17:53.548295   11224 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:17:53.548295   11224 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:17:53.548295   11224 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:17:53.548295   11224 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:17:53.548822   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:17:53.548946   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:17:53.549113   11224 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:17:53.549266   11224 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:17:53.549420   11224 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:17:53.549533   11224 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:17:53.549684   11224 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:17:53.549789   11224 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:17:53.549883   11224 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:17:53.549883   11224 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:17:53.549883   11224 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:17:53.549883   11224 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:17:53.549883   11224 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:17:53.549883   11224 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:17:53.550419   11224 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:17:53.550505   11224 kubeadm.go:319] OS: Linux
	I1210 07:17:53.550583   11224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:17:53.550655   11224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:17:53.550708   11224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:17:53.550708   11224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:17:53.550708   11224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:17:53.550708   11224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:17:53.550708   11224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:17:53.550708   11224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:17:53.551303   11224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:17:53.647973   11224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:17:53.647973   11224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:17:53.648665   11224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:18:02.579478   11224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:18:02.583170   11224 out.go:252]   - Generating certificates and keys ...
	I1210 07:18:02.583170   11224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:18:02.583170   11224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:18:02.682249   11224 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:18:02.750025   11224 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:18:02.895523   11224 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:18:03.002037   11224 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:18:03.048034   11224 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:18:03.049042   11224 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-099700] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 07:18:03.133084   11224 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:18:03.133084   11224 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-099700] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1210 07:18:03.340600   11224 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:18:03.690684   11224 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:18:04.120533   11224 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:18:04.120533   11224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:18:04.580143   11224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:18:04.692026   11224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:18:04.789910   11224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:18:04.948137   11224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:18:05.096283   11224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:18:05.152191   11224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:18:05.158491   11224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:18:05.374614   11224 out.go:252]   - Booting up control plane ...
	I1210 07:18:05.374868   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:18:05.374868   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:18:05.375549   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:18:05.375549   11224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:18:05.376103   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:18:05.376370   11224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:18:05.376370   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:18:05.376370   11224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:18:05.405421   11224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:18:05.405692   11224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:22:05.386549   11224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000573142s
	I1210 07:22:05.386631   11224 kubeadm.go:319] 
	I1210 07:22:05.386631   11224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:22:05.386631   11224 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:22:05.386631   11224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:22:05.387175   11224 kubeadm.go:319] 
	I1210 07:22:05.387496   11224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:22:05.387704   11224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:22:05.387814   11224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:22:05.387814   11224 kubeadm.go:319] 
	I1210 07:22:05.392001   11224 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:22:05.393253   11224 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:22:05.393772   11224 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:22:05.394316   11224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:22:05.394316   11224 kubeadm.go:319] 
	I1210 07:22:05.394316   11224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:22:05.395078   11224 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-099700] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-099700] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000573142s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
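The failure is entirely on the kubelet side: kubeadm's kubelet-check polls http://127.0.0.1:10248/healthz for up to four minutes and every call above is refused, meaning the kubelet never bound its health port. A minimal Go probe of the same endpoint, useful for reproducing the check by hand inside the node (a sketch; not part of the test suite):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same request kubeadm loops on; "connection refused" here means the
		// kubelet is not running at all, matching the diagnosis above.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}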
	
	I1210 07:22:05.399224   11224 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 07:22:05.857058   11224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:22:05.875747   11224 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:22:05.880314   11224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:22:05.894930   11224 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:22:05.894930   11224 kubeadm.go:158] found existing configuration files:
	
	I1210 07:22:05.898930   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:22:05.912931   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:22:05.915933   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:22:05.933934   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:22:05.946945   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:22:05.950927   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:22:05.968952   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:22:05.981928   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:22:05.985928   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:22:06.001929   11224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:22:06.013923   11224 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:22:06.017923   11224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:22:06.033921   11224 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:22:06.157011   11224 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:22:06.245530   11224 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:22:06.355521   11224 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:26:07.100531   11224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:26:07.100645   11224 kubeadm.go:319] 
	I1210 07:26:07.100914   11224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:26:07.107830   11224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:26:07.107830   11224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:07.109416   11224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:26:07.109416   11224 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] OS: Linux
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:07.113996   11224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:07.115992   11224 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:07.115992   11224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:07.121990   11224 out.go:252]   - Booting up control plane ...
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:07.123991   11224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000660194s
	I1210 07:26:07.123991   11224 kubeadm.go:319] 
	I1210 07:26:07.123991   11224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:26:07.123991   11224 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:403] duration metric: took 8m14.0562387s to StartCluster
	I1210 07:26:07.124990   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:07.128999   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:07.189549   11224 cri.go:89] found id: ""
	I1210 07:26:07.189549   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.190547   11224 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:26:07.190547   11224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:26:07.193548   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:07.244335   11224 cri.go:89] found id: ""
	I1210 07:26:07.244335   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.244335   11224 logs.go:284] No container was found matching "etcd"
	I1210 07:26:07.244335   11224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:26:07.248555   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:07.295451   11224 cri.go:89] found id: ""
	I1210 07:26:07.295451   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.295451   11224 logs.go:284] No container was found matching "coredns"
	I1210 07:26:07.295451   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:07.299449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:07.346456   11224 cri.go:89] found id: ""
	I1210 07:26:07.346456   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.346456   11224 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:26:07.346456   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:07.352449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:07.400714   11224 cri.go:89] found id: ""
	I1210 07:26:07.400714   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.400714   11224 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:07.400714   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:07.406617   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:07.469611   11224 cri.go:89] found id: ""
	I1210 07:26:07.469611   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.469611   11224 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:26:07.469611   11224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:07.473612   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:07.521612   11224 cri.go:89] found id: ""
	I1210 07:26:07.521612   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.521612   11224 logs.go:284] No container was found matching "kindnet"
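Each cri.go query above asks crictl for the IDs of containers matching one control-plane component name, across all states; the run of empty results confirms nothing was ever scheduled. A sketch of the same query in Go (listContainers is a hypothetical helper mirroring the logged command):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the container IDs crictl reports for a name
	// filter; an empty slice corresponds to the `found id: ""` lines above.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			ids, err := listContainers(name)
			fmt.Println(name, ids, err)
		}
	}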
	I1210 07:26:07.521612   11224 logs.go:123] Gathering logs for Docker ...
	I1210 07:26:07.521612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:26:07.551610   11224 logs.go:123] Gathering logs for container status ...
	I1210 07:26:07.552612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:07.608708   11224 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:07.608708   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:07.689194   11224 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:07.689194   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:07.734619   11224 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:07.734619   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:07.823677   11224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:26:07.823677   11224 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:26:07.823677   11224 out.go:285] * 
	* 
	W1210 07:26:07.825673   11224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:26:07.830674   11224 out.go:203] 
	W1210 07:26:07.833685   11224 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.833685   11224 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:26:07.833685   11224 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:26:07.837675   11224 out.go:203] 

                                                
                                                
** /stderr **
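Reading the capture above: kubeadm completed the certs, kubeconfig, and static-pod-manifest phases, but the kubelet never answered its health endpoint (http://127.0.0.1:10248/healthz) within the 4m0s budget, so the control-plane pods never started and every later call to localhost:8443 was refused. A minimal sketch for confirming this from inside the node, assuming the no-preload-099700 container is still running and using the checks the dump itself suggests:

	out/minikube-windows-amd64.exe -p no-preload-099700 ssh -- sudo systemctl status kubelet --no-pager
	out/minikube-windows-amd64.exe -p no-preload-099700 ssh -- sudo journalctl -xeu kubelet -n 50 --no-pager
	out/minikube-windows-amd64.exe -p no-preload-099700 ssh -- curl -s http://127.0.0.1:10248/healthz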
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1": exit status 109
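The log's own Suggestion points at the kubelet cgroup driver, and the SystemVerification and Swap warnings indicate the node is on cgroups v1, which kubelet v1.35 or newer rejects unless the kubelet configuration option 'FailCgroupV1' is set to 'false'; a cgroup v1/driver mismatch on this WSL2 host is therefore a plausible root cause. A hedged retry under that assumption, reusing the failed invocation's arguments plus the suggested extra-config:

	out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1 --extra-config=kubelet.cgroup-driver=systemd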
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:17:16.221120749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19d075be822285a6bc04718614fae0d1e6b527c2b7b973ed840dd03da78703c1",
	            "SandboxKey": "/var/run/docker/netns/19d075be8222",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56155"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56156"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "bd211c76c769a23696ddb9b2e4a3cd1f6c2388bff504ec060a8ffe809e64dcb5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
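For reference, the container limits in the inspect output match the requested flags: "Memory": 3221225472 is exactly the 3072 MiB from --memory=3072 (3072 × 1024 × 1024 = 3,221,225,472 bytes), "MemorySwap" equal to "Memory" means no additional swap is granted, and "NanoCpus": 2000000000 is 2 CPUs (10^9 nano-CPUs per CPU). A quick sanity check of the arithmetic in shell:

	echo $((3072 * 1024 * 1024))   # prints 3221225472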
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 6 (697.9923ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:26:09.032350    2220 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
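The exit status 6 is consistent with the stdout warning above: the node host is Running, but the profile is absent from the kubeconfig (see the status.go error), so minikube cannot resolve the cluster endpoint. The remediation the warning itself names, assuming the same profile, would be:

	out/minikube-windows-amd64.exe -p no-preload-099700 update-context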
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (1.2387194s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                     │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-648600 sudo systemctl cat kubelet --no-pager                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /var/lib/kubelet/config.yaml                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status docker --all --full --no-pager                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat docker --no-pager                                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/docker/daemon.json                                                           │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo docker system info                                                                    │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status cri-docker --all --full --no-pager                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat cri-docker --no-pager                                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                              │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service                                        │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cri-dockerd --version                                                                 │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status containerd --all --full --no-pager                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat containerd --no-pager                                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /lib/systemd/system/containerd.service                                            │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/containerd/config.toml                                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo containerd config dump                                                                │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status crio --all --full --no-pager                                         │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │                     │
	│ ssh     │ -p flannel-648600 sudo systemctl cat crio --no-pager                                                         │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                               │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo crio config                                                                           │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ delete  │ -p flannel-648600                                                                                            │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ start   │ -p bridge-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker │ bridge-648600             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │                     │
	│ ssh     │ -p enable-default-cni-648600 pgrep -a kubelet                                                                │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:25:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:25:49.543159    4804 out.go:360] Setting OutFile to fd 1260 ...
	I1210 07:25:49.586332    4804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:25:49.586332    4804 out.go:374] Setting ErrFile to fd 812...
	I1210 07:25:49.586377    4804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:25:49.601444    4804 out.go:368] Setting JSON to false
	I1210 07:25:49.603301    4804 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10481,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:25:49.603301    4804 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:25:49.607247    4804 out.go:179] * [bridge-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:25:49.610906    4804 notify.go:221] Checking for updates...
	I1210 07:25:49.613490    4804 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:25:49.615618    4804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:25:49.617459    4804 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:25:49.620105    4804 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:25:49.622698    4804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:25:47.110970   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	W1210 07:25:49.622010   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	I1210 07:25:49.625061    4804 config.go:182] Loaded profile config "enable-default-cni-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:25:49.625872    4804 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:25:49.626037    4804 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:25:49.626037    4804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:25:49.756585    4804 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:25:49.760223    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:25:49.995247    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:49.978486557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:49.998261    4804 out.go:179] * Using the docker driver based on user configuration
	I1210 07:25:50.001264    4804 start.go:309] selected driver: docker
	I1210 07:25:50.002267    4804 start.go:927] validating driver "docker" against <nil>
	I1210 07:25:50.002267    4804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:25:50.087841    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:25:50.326740    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:50.304007932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:50.326740    4804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:25:50.328404    4804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:25:50.338396    4804 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:25:50.340335    4804 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:25:50.340335    4804 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:25:50.340335    4804 start.go:353] cluster config:
	{Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:25:50.343283    4804 out.go:179] * Starting "bridge-648600" primary control-plane node in "bridge-648600" cluster
	I1210 07:25:50.346532    4804 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:25:50.348744    4804 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:25:50.351187    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:25:50.351187    4804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:25:50.394442    4804 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:25:50.434159    4804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:25:50.434159    4804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:25:50.622000    4804 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:25:50.622000    4804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json ...
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:25:50.622000    4804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json: {Name:mkda6ce656f671ed6502f97ceabe139018dc3485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
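
[Editor's note: the localpath.go "windows sanitize" lines above exist because NTFS file names cannot contain ':', so the image tag separator is rewritten to '_' before each image is cached on disk. A minimal sketch of the same mapping, assuming bash; the IMG variable is illustrative, not part of minikube:]

    # hypothetical: derive the Windows-safe cache name for an image reference
    IMG="registry.k8s.io/kube-apiserver:v1.34.3"
    echo "${IMG//:/_}"   # -> registry.k8s.io/kube-apiserver_v1.34.3
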
	I1210 07:25:50.623233    4804 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:25:50.623233    4804 start.go:360] acquireMachinesLock for bridge-648600: {Name:mk22986727a0b030c8919e2ba8ce1cc03f255d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:50.623828    4804 start.go:364] duration metric: took 594.9µs to acquireMachinesLock for "bridge-648600"
	I1210 07:25:50.624001    4804 start.go:93] Provisioning new machine with config: &{Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:25:50.624086    4804 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:25:50.630473    4804 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:25:50.631176    4804 start.go:159] libmachine.API.Create for "bridge-648600" (driver="docker")
	I1210 07:25:50.631275    4804 client.go:173] LocalClient.Create starting
	I1210 07:25:50.631359    4804 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:25:50.631951    4804 main.go:143] libmachine: Decoding PEM data...
	I1210 07:25:50.631985    4804 main.go:143] libmachine: Parsing certificate...
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Decoding PEM data...
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Parsing certificate...
	I1210 07:25:50.637892    4804 cli_runner.go:164] Run: docker network inspect bridge-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:25:50.766370    4804 cli_runner.go:211] docker network inspect bridge-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:25:50.773371    4804 network_create.go:284] running [docker network inspect bridge-648600] to gather additional debugging logs...
	I1210 07:25:50.773371    4804 cli_runner.go:164] Run: docker network inspect bridge-648600
	W1210 07:25:50.940187    4804 cli_runner.go:211] docker network inspect bridge-648600 returned with exit code 1
	I1210 07:25:50.940187    4804 network_create.go:287] error running [docker network inspect bridge-648600]: docker network inspect bridge-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-648600 not found
	I1210 07:25:50.940187    4804 network_create.go:289] output of [docker network inspect bridge-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-648600 not found
	
	** /stderr **
	I1210 07:25:50.943184    4804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:51.023198    4804 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.067123    4804 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.118311    4804 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.415711    4804 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.593726    4804 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.749810    4804 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.785598    4804 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.819602    4804 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc7020}
	I1210 07:25:51.819602    4804 network_create.go:124] attempt to create docker network bridge-648600 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1210 07:25:51.824606    4804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-648600 bridge-648600
	I1210 07:25:52.457680    4804 network_create.go:108] docker network bridge-648600 192.168.112.0/24 created
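
[Editor's note: the network.go lines above walk candidate private /24 subnets (192.168.49.0, .58, .67, ... in steps of 9) and take the first one not reserved by an existing Docker network. A rough shell equivalent, assuming the docker CLI is available; the step list simply mirrors what the log shows, and minikube's real check also considers host interfaces:]

    # collect subnets already claimed by docker networks, then pick the first free /24
    used=$(docker network ls -q | xargs -n1 docker network inspect \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
    for third in 49 58 67 76 85 94 103 112; do
      cidr="192.168.${third}.0/24"
      echo "$used" | grep -qxF "$cidr" || { echo "free: $cidr"; break; }
    done
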
	I1210 07:25:52.457680    4804 kic.go:121] calculated static IP "192.168.112.2" for the "bridge-648600" container
	I1210 07:25:52.468991    4804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:25:52.567040    4804 cli_runner.go:164] Run: docker volume create bridge-648600 --label name.minikube.sigs.k8s.io=bridge-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:25:52.654855    4804 oci.go:103] Successfully created a docker volume bridge-648600
	I1210 07:25:52.661290    4804 cli_runner.go:164] Run: docker run --rm --name bridge-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --entrypoint /usr/bin/test -v bridge-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:25:53.606607    4804 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.606650    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:25:53.606650    4804 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9846033s
	I1210 07:25:53.606650    4804 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:25:53.611255    4804 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.611255    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:25:53.611255    4804 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9892082s
	I1210 07:25:53.611255    4804 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:25:53.618257    4804 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.618257    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:25:53.618257    4804 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 2.9962103s
	I1210 07:25:53.618257    4804 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:25:53.622270    4804 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.622270    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:25:53.623277    4804 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.0012304s
	I1210 07:25:53.623277    4804 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:25:53.639040    4804 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.639270    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:25:53.639270    4804 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.0172233s
	I1210 07:25:53.639270    4804 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:25:53.654496    4804 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.654560    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:25:53.654560    4804 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.0325133s
	I1210 07:25:53.654560    4804 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:25:53.657375    4804 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.657375    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:25:53.657375    4804 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.0353279s
	I1210 07:25:53.657375    4804 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:25:53.721903    4804 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.722919    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:25:53.722919    4804 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.1008707s
	I1210 07:25:53.722919    4804 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:25:53.722919    4804 cache.go:87] Successfully saved all images to host disk.
	I1210 07:25:54.341687    4804 cli_runner.go:217] Completed: docker run --rm --name bridge-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --entrypoint /usr/bin/test -v bridge-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6803432s)
	I1210 07:25:54.341687    4804 oci.go:107] Successfully prepared a docker volume bridge-648600
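
[Editor's note: the "-preload-sidecar" run above does double duty: mounting the empty named volume at /var makes Docker copy the image's /var contents into it on first mount, and the entrypoint overridden to /usr/bin/test makes the container exit 0 only if /var/lib ended up inside the volume. A standalone sketch of the same trick, assuming the ubuntu image (which ships /usr/bin/test) and an illustrative volume name:]

    # exit status reveals whether the volume now contains /var/lib
    docker run --rm --entrypoint /usr/bin/test -v myvol:/var ubuntu -d /var/lib \
      && echo "volume prepared" || echo "volume empty"
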
	I1210 07:25:54.341687    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:25:54.345933    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	W1210 07:25:51.668965   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	W1210 07:25:54.108193   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	I1210 07:25:55.617983   10052 pod_ready.go:94] pod "coredns-66bc5c9577-snb42" is "Ready"
	I1210 07:25:55.617983   10052 pod_ready.go:86] duration metric: took 32.0203634s for pod "coredns-66bc5c9577-snb42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.617983   10052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.622647   10052 pod_ready.go:99] pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-z85xd" not found
	I1210 07:25:55.622692   10052 pod_ready.go:86] duration metric: took 4.7083ms for pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.628956   10052 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.641372   10052 pod_ready.go:94] pod "etcd-enable-default-cni-648600" is "Ready"
	I1210 07:25:55.641424   10052 pod_ready.go:86] duration metric: took 12.4205ms for pod "etcd-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.649373   10052 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.660931   10052 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-648600" is "Ready"
	I1210 07:25:55.660931   10052 pod_ready.go:86] duration metric: took 11.513ms for pod "kube-apiserver-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.665948   10052 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.004282   10052 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-648600" is "Ready"
	I1210 07:25:56.004282   10052 pod_ready.go:86] duration metric: took 338.3283ms for pod "kube-controller-manager-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.204210   10052 pod_ready.go:83] waiting for pod "kube-proxy-vbl22" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.604378   10052 pod_ready.go:94] pod "kube-proxy-vbl22" is "Ready"
	I1210 07:25:56.604904   10052 pod_ready.go:86] duration metric: took 400.6871ms for pod "kube-proxy-vbl22" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.803854   10052 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:57.202693   10052 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-648600" is "Ready"
	I1210 07:25:57.203218   10052 pod_ready.go:86] duration metric: took 399.2547ms for pod "kube-scheduler-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:57.203218   10052 pod_ready.go:40] duration metric: took 33.6115798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:25:57.296715   10052 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:25:57.302628   10052 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-648600" cluster and "default" namespace by default
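
[Editor's note: the pid-10052 pod_ready.go loop above polls each labelled kube-system pod until it reports Ready or disappears (33.6s in total here, dominated by the coredns wait). Roughly the same readiness check can be reproduced by hand with kubectl, for example:]

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
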
	I1210 07:25:54.612936    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:54.590365523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:54.615934    4804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:25:54.861212    4804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-648600 --name bridge-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-648600 --network bridge-648600 --ip 192.168.112.2 --volume bridge-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
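
[Editor's note: the single docker run above creates the node container: privileged, a fixed IP on the per-profile network, the profile volume mounted at /var, and ports 22/2376/5000/8443/32443 published to ephemeral 127.0.0.1 ports. The host port a later step dials for SSH (57145 below) can be recovered at any time with docker port:]

    docker port bridge-648600 22/tcp   # e.g. 127.0.0.1:57145
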
	I1210 07:25:55.596152    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Running}}
	I1210 07:25:55.671931    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:55.726932    4804 cli_runner.go:164] Run: docker exec bridge-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:25:55.835842    4804 oci.go:144] the created container "bridge-648600" has a running status.
	I1210 07:25:55.835842    4804 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa...
	I1210 07:25:55.990727    4804 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:25:56.069551    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:56.135549    4804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:25:56.135549    4804 kic_runner.go:114] Args: [docker exec --privileged bridge-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:25:56.296490    4804 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa...
	I1210 07:25:58.538165    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:58.610699    4804 machine.go:94] provisionDockerMachine start ...
	I1210 07:25:58.614716    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:58.671691    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:58.684691    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:58.684691    4804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:25:58.854993    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-648600
	
	I1210 07:25:58.855026    4804 ubuntu.go:182] provisioning hostname "bridge-648600"
	I1210 07:25:58.858622    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:58.909867    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:58.910872    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:58.910872    4804 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-648600 && echo "bridge-648600" | sudo tee /etc/hostname
	I1210 07:25:59.133481    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-648600
	
	I1210 07:25:59.139277    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.193639    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:59.194659    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:59.194659    4804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:25:59.366643    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
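
[Editor's note: the remote /etc/hosts script above is idempotent: grep -xq matches whole lines only, so nothing happens when a line ending in the hostname already exists; otherwise an existing 127.0.1.1 entry is rewritten in place, else one is appended. The guard can be exercised locally, assuming GNU grep (\s is a GNU extension):]

    # prints "would edit" only when no whole line ends in the hostname
    printf '127.0.1.1 oldname\n' | grep -xq '.*\sbridge-648600' || echo "would edit"
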
	I1210 07:25:59.366643    4804 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:25:59.366643    4804 ubuntu.go:190] setting up certificates
	I1210 07:25:59.366643    4804 provision.go:84] configureAuth start
	I1210 07:25:59.372569    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:25:59.424305    4804 provision.go:143] copyHostCerts
	I1210 07:25:59.424305    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:25:59.424305    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:25:59.425310    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:25:59.426315    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:25:59.426315    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:25:59.426315    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:25:59.426315    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:25:59.426315    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:25:59.427309    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:25:59.428305    4804 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-648600 san=[127.0.0.1 192.168.112.2 bridge-648600 localhost minikube]
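
[Editor's note: provision.go signs a per-machine server certificate against the minikube CA, with the container IP, loopback, and hostnames above as SANs. A comparable openssl sequence, purely illustrative and not minikube's actual code path, assuming bash and the ca.pem/ca-key.pem pair named in the log:]

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.bridge-648600"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 -extfile <(printf \
      'subjectAltName=IP:127.0.0.1,IP:192.168.112.2,DNS:bridge-648600,DNS:localhost,DNS:minikube')
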
	I1210 07:25:59.609649    4804 provision.go:177] copyRemoteCerts
	I1210 07:25:59.612966    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:25:59.616449    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.669935    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:25:59.791998    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:25:59.820476    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:25:59.846719    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:25:59.877046    4804 provision.go:87] duration metric: took 510.3942ms to configureAuth
	I1210 07:25:59.877077    4804 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:25:59.877619    4804 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:25:59.880641    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.942142    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:59.942142    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:59.942142    4804 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:26:00.118053    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:26:00.118118    4804 ubuntu.go:71] root file system type: overlay
	I1210 07:26:00.118212    4804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:26:00.123385    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:00.181410    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:00.181982    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:26:00.181982    4804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:26:00.393612    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:26:00.397347    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:00.457089    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:00.457167    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:26:00.457167    4804 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:26:01.934057    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:26:00.376152452 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
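
Editor's note: the SSH command at 07:26:00.457 uses "diff -u old new || { ... }" so the unit file is only replaced, and docker only restarted, when the freshly rendered unit actually differs from the installed one. The same idempotent-update pattern written out long-hand (paths taken from the log):

	# Install and restart only when the rendered unit differs from the live one.
	if ! sudo diff -u /lib/systemd/system/docker.service \
	               /lib/systemd/system/docker.service.new >/dev/null; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi

Here diff exits non-zero (the files differ, as the hunk above shows), so the replace-and-restart branch runs, producing the "Synchronizing state of docker.service..." output.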
	
	I1210 07:26:01.934057    4804 machine.go:97] duration metric: took 3.3233058s to provisionDockerMachine
	I1210 07:26:01.934057    4804 client.go:176] duration metric: took 11.302605s to LocalClient.Create
	I1210 07:26:01.934057    4804 start.go:167] duration metric: took 11.3027041s to libmachine.API.Create "bridge-648600"
	I1210 07:26:01.934594    4804 start.go:293] postStartSetup for "bridge-648600" (driver="docker")
	I1210 07:26:01.934692    4804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:26:01.942235    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:26:01.945040    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.000544    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.145056    4804 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:26:02.153062    4804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:26:02.153062    4804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:26:02.153062    4804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:26:02.154054    4804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:26:02.154054    4804 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:26:02.160050    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:26:02.177064    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:26:02.217059    4804 start.go:296] duration metric: took 282.4039ms for postStartSetup
	I1210 07:26:02.224068    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:26:02.300072    4804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json ...
	I1210 07:26:02.307063    4804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:26:02.311068    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.374070    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.523061    4804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:26:02.533071    4804 start.go:128] duration metric: took 11.9087995s to createHost
	I1210 07:26:02.533071    4804 start.go:83] releasing machines lock for "bridge-648600", held for 11.9089951s
	I1210 07:26:02.538055    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:26:02.606067    4804 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:26:02.611065    4804 ssh_runner.go:195] Run: cat /version.json
	I1210 07:26:02.611065    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.615063    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.672078    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.673066    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	W1210 07:26:02.794076    4804 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
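
Editor's note: this probe fails because the Windows-side binary name (curl.exe) is passed verbatim into the Linux guest, where only "curl" exists; the "Failing to connect to https://registry.k8s.io/" warning a few lines below appears to be a consequence of that, not of a real proxy problem. A hedged manual re-check from inside the guest (profile name taken from this log):

	out/minikube-windows-amd64.exe -p bridge-648600 ssh "curl -sS -m 2 https://registry.k8s.io/"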
	I1210 07:26:02.799071    4804 ssh_runner.go:195] Run: systemctl --version
	I1210 07:26:02.818066    4804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:26:02.829076    4804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:26:02.835123    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:26:02.893077    4804 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
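
Editor's note: competing bridge/podman CNI configs are sidelined by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. The log strips the shell escaping; with quoting restored the command reads (a reconstruction, assuming GNU find):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;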
	I1210 07:26:02.893077    4804 start.go:496] detecting cgroup driver to use...
	I1210 07:26:02.893077    4804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:26:02.894067    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:26:02.911072    4804 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:26:02.911072    4804 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:26:02.931084    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:26:02.958066    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:26:02.978070    4804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:26:02.983075    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:26:03.006095    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:26:03.029063    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:26:03.051070    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:26:03.073080    4804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:26:03.101275    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:26:03.129493    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:26:03.156512    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
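
Editor's note: the sed series above converges /etc/containerd/config.toml on a handful of CRI settings. A quick spot-check of the result (expected values shown as comments; key placement is assumed from containerd's stock config, which this log does not dump):

	sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false            # matches the detected "cgroupfs" driver
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true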
	I1210 07:26:03.180504    4804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:26:03.204499    4804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
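
Editor's note: the two kernel prerequisites touched above (a read of the bridge-netfilter toggle, and a write to ip_forward via /proc) are equivalent to the standard sysctl form — a sketch of the same effect, not what the log actually ran:

	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	sudo sysctl -w net.ipv4.ip_forward=1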
	I1210 07:26:03.229498    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:03.450069    4804 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:26:03.635076    4804 start.go:496] detecting cgroup driver to use...
	I1210 07:26:03.635076    4804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:26:03.641073    4804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:26:03.669080    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:26:03.696079    4804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:26:03.759984    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:26:03.783974    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:26:03.803590    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:26:03.837786    4804 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:26:03.848792    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:26:03.865545    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:26:03.905470    4804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:26:04.086477    4804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:26:04.248470    4804 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:26:04.248470    4804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:26:04.276479    4804 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:26:04.305474    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:04.461490    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
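
Editor's note: at this point crictl has been repointed from containerd to cri-dockerd (the /etc/crictl.yaml rewrite at 07:26:03.803). Verifying the wiring from inside the guest should match the "RuntimeName: docker" version block that appears later in this log:

	cat /etc/crictl.yaml        # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	sudo crictl version         # RuntimeName: docker, RuntimeApiVersion: v1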
	I1210 07:26:07.100531   11224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:26:07.100645   11224 kubeadm.go:319] 
	I1210 07:26:07.100914   11224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:26:07.107830   11224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:26:07.107830   11224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:07.109416   11224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:26:07.109416   11224 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] OS: Linux
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:07.113996   11224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:07.115992   11224 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:07.115992   11224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:06.143507    4804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6819903s)
	I1210 07:26:06.148172    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:26:06.173866    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:26:06.199939    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:26:06.223738    4804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:26:06.369886    4804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:26:06.510578    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:06.651893    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:26:06.680901    4804 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:26:06.708083    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:06.853347    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:26:06.965850    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:26:06.985257    4804 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:26:06.989257    4804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:26:06.996250    4804 start.go:564] Will wait 60s for crictl version
	I1210 07:26:07.000258    4804 ssh_runner.go:195] Run: which crictl
	I1210 07:26:07.012023    4804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:26:07.058889    4804 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:26:07.063603    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:26:07.115992    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:26:07.121990   11224 out.go:252]   - Booting up control plane ...
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:07.123991   11224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000660194s
	I1210 07:26:07.123991   11224 kubeadm.go:319] 
	I1210 07:26:07.123991   11224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:26:07.123991   11224 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:403] duration metric: took 8m14.0562387s to StartCluster
	I1210 07:26:07.124990   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:07.128999   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:07.189549   11224 cri.go:89] found id: ""
	I1210 07:26:07.189549   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.190547   11224 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:26:07.190547   11224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:26:07.193548   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:07.244335   11224 cri.go:89] found id: ""
	I1210 07:26:07.244335   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.244335   11224 logs.go:284] No container was found matching "etcd"
	I1210 07:26:07.244335   11224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:26:07.248555   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:07.295451   11224 cri.go:89] found id: ""
	I1210 07:26:07.295451   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.295451   11224 logs.go:284] No container was found matching "coredns"
	I1210 07:26:07.295451   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:07.299449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:07.346456   11224 cri.go:89] found id: ""
	I1210 07:26:07.346456   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.346456   11224 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:26:07.346456   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:07.352449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:07.400714   11224 cri.go:89] found id: ""
	I1210 07:26:07.400714   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.400714   11224 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:07.400714   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:07.406617   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:07.469611   11224 cri.go:89] found id: ""
	I1210 07:26:07.469611   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.469611   11224 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:26:07.469611   11224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:07.473612   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:07.521612   11224 cri.go:89] found id: ""
	I1210 07:26:07.521612   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.521612   11224 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:07.521612   11224 logs.go:123] Gathering logs for Docker ...
	I1210 07:26:07.521612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:26:07.551610   11224 logs.go:123] Gathering logs for container status ...
	I1210 07:26:07.552612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:07.608708   11224 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:07.608708   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:07.689194   11224 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:07.689194   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:07.734619   11224 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:07.734619   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:07.823677   11224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:26:07.823677   11224 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:26:07.823677   11224 out.go:285] * 
	W1210 07:26:07.823677   11224 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.823677   11224 out.go:285] * 
	W1210 07:26:07.825673   11224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:26:07.830674   11224 out.go:203] 
	W1210 07:26:07.833685   11224 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.833685   11224 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:26:07.833685   11224 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:26:07.837675   11224 out.go:203] 
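
Editor's note: the underlying failure surfaces in the kubelet journal below: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host (this WSL2 kernel) unless cgroup v1 support is explicitly re-enabled. Two hedged remediations, neither applied in this run. First, per the SystemVerification warning above, set FailCgroupV1 to false in the kubelet configuration (field spelling assumed from the v1beta1 KubeletConfiguration API, not verified against this build):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

Alternatively, move the host itself to cgroup v2; on WSL2 the commonly documented knob (an assumption from WSL documentation, not from this log) is:

	# %UserProfile%\.wslconfig
	[wsl2]
	kernelCommandLine = cgroup_no_v1=all

	wsl --shutdown    # restart the WSL VM so the kernel command line takes effect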
	
	
	==> Docker <==
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653477207Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653491208Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653496809Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653502209Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653531612Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653569015Z" level=info msg="Initializing buildkit"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.846125896Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.854786460Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855010880Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855019980Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855177894Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 07:18:02 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:18:02Z" level=info msg="Stop pulling image registry.k8s.io/etcd:3.6.6-0: Status: Downloaded newer image for registry.k8s.io/etcd:3.6.6-0"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:10.184253   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:10.185542   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:10.186666   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:10.187495   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:10.189848   11184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:26] CPU: 0 PID: 442139 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7fa4c8168b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fa4c8168af6.
	[  +0.000002] RSP: 002b:00007ffcec5b9c60 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.959943] CPU: 3 PID: 442297 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8a7efcdb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f8a7efcdaf6.
	[  +0.000001] RSP: 002b:00007fffca681070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:26:10 up  2:54,  0 user,  load average: 6.03, 5.57, 4.80
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:26:06 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:07 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 10 07:26:07 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:07 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:07 no-preload-099700 kubelet[11009]: E1210 07:26:07.715928   11009 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:07 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:07 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:08 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 07:26:08 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:08 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:08 no-preload-099700 kubelet[11042]: E1210 07:26:08.484983   11042 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:08 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:08 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 07:26:09 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:09 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:09 no-preload-099700 kubelet[11068]: E1210 07:26:09.222342   11068 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:26:09 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:09 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:09 no-preload-099700 kubelet[11128]: E1210 07:26:09.954246   11128 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 6 (573.8625ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:26:11.052287   11964 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (543.59s)
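
The failure mode above is uniform across the restart loop (counters 318 through 321): kubelet v1.35.0-rc.1 validates the host's cgroup mode at startup and exits because this WSL2 kernel still exposes a cgroup v1 hierarchy. A minimal, Linux-only Go sketch of such a pre-flight check (not minikube code; the path and magic constant come from linux/magic.h):

package main

import (
	"fmt"
	"syscall"
)

// CGROUP2_SUPER_MAGIC from linux/magic.h: statfs reports this f_type
// when /sys/fs/cgroup is a cgroup v2 (unified) mount.
const cgroup2Magic = 0x63677270

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs /sys/fs/cgroup failed:", err)
		return
	}
	if st.Type == cgroup2Magic {
		fmt.Println("cgroup v2 (unified): kubelet v1.35 validation should pass")
	} else {
		// A v1 hierarchy is exactly what kubelet's "configured to not run
		// on a host using cgroup v1" error above rejects.
		fmt.Println("cgroup v1: kubelet v1.35+ refuses to start")
	}
}

Run inside the no-preload-099700 container this would presumably take the v1 branch; the newest-cni-525200 run below, on the same host and Kubernetes version, likely fails the same way.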

TestStartStop/group/newest-cni/serial/FirstStart (543.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1
E1210 07:19:18.950538   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:19:29.055106   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:19:45.964866   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 109 (9m0.2452836s)

-- stdout --
	* [newest-cni-525200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "newest-cni-525200" primary control-plane node in "newest-cni-525200" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1210 07:18:33.786078    6232 out.go:360] Setting OutFile to fd 1556 ...
	I1210 07:18:33.831497    6232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:18:33.831497    6232 out.go:374] Setting ErrFile to fd 1016...
	I1210 07:18:33.831497    6232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:18:33.846542    6232 out.go:368] Setting JSON to false
	I1210 07:18:33.849415    6232 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10045,"bootTime":1765341068,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:18:33.849415    6232 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:18:33.854305    6232 out.go:179] * [newest-cni-525200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:18:33.859981    6232 notify.go:221] Checking for updates...
	I1210 07:18:33.862418    6232 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:18:33.867733    6232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:18:33.870895    6232 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:18:33.874005    6232 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:18:33.877905    6232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:18:33.881123    6232 config.go:182] Loaded profile config "default-k8s-diff-port-144100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:18:33.881780    6232 config.go:182] Loaded profile config "kubernetes-upgrade-458400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:18:33.882133    6232 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:18:33.882133    6232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:18:34.024186    6232 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:18:34.027184    6232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:18:34.266485    6232 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:18:34.248338703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:18:34.269482    6232 out.go:179] * Using the docker driver based on user configuration
	I1210 07:18:34.272478    6232 start.go:309] selected driver: docker
	I1210 07:18:34.272478    6232 start.go:927] validating driver "docker" against <nil>
	I1210 07:18:34.272478    6232 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:18:34.319722    6232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:18:34.556548    6232 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:18:34.537450086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:18:34.556548    6232 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1210 07:18:34.556548    6232 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1210 07:18:34.557545    6232 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:18:34.560549    6232 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:18:34.562545    6232 cni.go:84] Creating CNI manager for ""
	I1210 07:18:34.562545    6232 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:18:34.562545    6232 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:18:34.562545    6232 start.go:353] cluster config:
	{Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:18:34.565541    6232 out.go:179] * Starting "newest-cni-525200" primary control-plane node in "newest-cni-525200" cluster
	I1210 07:18:34.568552    6232 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:18:34.570543    6232 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:18:34.574541    6232 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:18:34.574541    6232 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:18:34.574541    6232 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 07:18:34.574541    6232 cache.go:65] Caching tarball of preloaded images
	I1210 07:18:34.574541    6232 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 07:18:34.574541    6232 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 07:18:34.575543    6232 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\config.json ...
	I1210 07:18:34.575543    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\config.json: {Name:mk4fb1b3f4b3cfc6bc7b2b1d4ba9b70420ca64fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:18:34.664560    6232 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:18:34.664560    6232 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:18:34.664560    6232 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:18:34.664560    6232 start.go:360] acquireMachinesLock for newest-cni-525200: {Name:mkd446da0a6d37aeadfde49218ee5d3bd06b715b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:18:34.664560    6232 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-525200"
	I1210 07:18:34.664560    6232 start.go:93] Provisioning new machine with config: &{Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:18:34.664560    6232 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:18:34.671541    6232 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:18:34.671541    6232 start.go:159] libmachine.API.Create for "newest-cni-525200" (driver="docker")
	I1210 07:18:34.671541    6232 client.go:173] LocalClient.Create starting
	I1210 07:18:34.671541    6232 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:18:34.672554    6232 main.go:143] libmachine: Decoding PEM data...
	I1210 07:18:34.672554    6232 main.go:143] libmachine: Parsing certificate...
	I1210 07:18:34.672554    6232 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:18:34.672554    6232 main.go:143] libmachine: Decoding PEM data...
	I1210 07:18:34.672554    6232 main.go:143] libmachine: Parsing certificate...
	I1210 07:18:34.676549    6232 cli_runner.go:164] Run: docker network inspect newest-cni-525200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:18:34.737547    6232 cli_runner.go:211] docker network inspect newest-cni-525200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:18:34.740557    6232 network_create.go:284] running [docker network inspect newest-cni-525200] to gather additional debugging logs...
	I1210 07:18:34.740557    6232 cli_runner.go:164] Run: docker network inspect newest-cni-525200
	W1210 07:18:34.788552    6232 cli_runner.go:211] docker network inspect newest-cni-525200 returned with exit code 1
	I1210 07:18:34.788552    6232 network_create.go:287] error running [docker network inspect newest-cni-525200]: docker network inspect newest-cni-525200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-525200 not found
	I1210 07:18:34.788552    6232 network_create.go:289] output of [docker network inspect newest-cni-525200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-525200 not found
	
	** /stderr **
	I1210 07:18:34.791548    6232 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:18:34.862612    6232 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.893521    6232 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.909714    6232 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.925195    6232 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.940601    6232 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.956255    6232 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.972155    6232 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:34.987890    6232 network.go:209] skipping subnet 192.168.112.0/24 that is reserved: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:18:35.001529    6232 network.go:206] using free private subnet 192.168.121.0/24: &{IP:192.168.121.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.121.0/24 Gateway:192.168.121.1 ClientMin:192.168.121.2 ClientMax:192.168.121.254 Broadcast:192.168.121.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016bfa40}
	I1210 07:18:35.001529    6232 network_create.go:124] attempt to create docker network newest-cni-525200 192.168.121.0/24 with gateway 192.168.121.1 and MTU of 1500 ...
	I1210 07:18:35.004681    6232 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.121.0/24 --gateway=192.168.121.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-525200 newest-cni-525200
	I1210 07:18:35.146514    6232 network_create.go:108] docker network newest-cni-525200 192.168.121.0/24 created
	I1210 07:18:35.146514    6232 kic.go:121] calculated static IP "192.168.121.2" for the "newest-cni-525200" container
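
The subnet scan just above is a linear probe: starting from 192.168.49.0/24, minikube steps the third octet by 9 (58, 67, 76, ...) and skips every /24 already reserved by an existing network, which is why this run settles on 192.168.121.0/24. A rough, self-contained Go illustration of that walk (hypothetical helper for this report, not the actual network.go implementation):

package main

import "fmt"

// firstFreeSubnet mimics the walk in the log: start at 192.168.49.0/24
// and step the third octet by 9 until a candidate is not reserved.
func firstFreeSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !reserved[cidr] {
			return cidr
		}
	}
	return "" // no free candidate in range
}

func main() {
	// The eight subnets the log reports as reserved by other clusters.
	reserved := make(map[string]bool)
	for _, o := range []int{49, 58, 67, 76, 85, 94, 103, 112} {
		reserved[fmt.Sprintf("192.168.%d.0/24", o)] = true
	}
	fmt.Println(firstFreeSubnet(reserved)) // 192.168.121.0/24
}

Seeded with the reservations above, it returns 192.168.121.0/24, the network then created for newest-cni-525200.
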
	I1210 07:18:35.157628    6232 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:18:35.219736    6232 cli_runner.go:164] Run: docker volume create newest-cni-525200 --label name.minikube.sigs.k8s.io=newest-cni-525200 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:18:35.273747    6232 oci.go:103] Successfully created a docker volume newest-cni-525200
	I1210 07:18:35.276736    6232 cli_runner.go:164] Run: docker run --rm --name newest-cni-525200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-525200 --entrypoint /usr/bin/test -v newest-cni-525200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:18:38.989229    6232 cli_runner.go:217] Completed: docker run --rm --name newest-cni-525200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-525200 --entrypoint /usr/bin/test -v newest-cni-525200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (3.7124353s)
	I1210 07:18:38.989229    6232 oci.go:107] Successfully prepared a docker volume newest-cni-525200
	I1210 07:18:38.989229    6232 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:18:38.989229    6232 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:18:38.992529    6232 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-525200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:18:57.722511    6232 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-525200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (18.7296916s)
	I1210 07:18:57.722511    6232 kic.go:203] duration metric: took 18.7329921s to extract preloaded images to volume ...
	I1210 07:18:57.726493    6232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:18:57.972101    6232 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:18:57.947419877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:18:57.976094    6232 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:18:58.227970    6232 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-525200 --name newest-cni-525200 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-525200 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-525200 --network newest-cni-525200 --ip 192.168.121.2 --volume newest-cni-525200:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:18:59.090751    6232 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Running}}
	I1210 07:18:59.156308    6232 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:18:59.210298    6232 cli_runner.go:164] Run: docker exec newest-cni-525200 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:18:59.325485    6232 oci.go:144] the created container "newest-cni-525200" has a running status.
	I1210 07:18:59.325485    6232 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa...
	I1210 07:18:59.493174    6232 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:18:59.569885    6232 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:18:59.634901    6232 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:18:59.634901    6232 kic_runner.go:114] Args: [docker exec --privileged newest-cni-525200 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:18:59.750884    6232 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa...
	I1210 07:19:01.970499    6232 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:19:02.027091    6232 machine.go:94] provisionDockerMachine start ...
	I1210 07:19:02.032045    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:02.087799    6232 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:02.101175    6232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56385 <nil> <nil>}
	I1210 07:19:02.101175    6232 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:19:02.281302    6232 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-525200
	
	I1210 07:19:02.281302    6232 ubuntu.go:182] provisioning hostname "newest-cni-525200"
	I1210 07:19:02.284935    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:02.342280    6232 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:02.342523    6232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56385 <nil> <nil>}
	I1210 07:19:02.342523    6232 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-525200 && echo "newest-cni-525200" | sudo tee /etc/hostname
	I1210 07:19:02.537233    6232 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-525200
	
	I1210 07:19:02.542234    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:02.599319    6232 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:02.600319    6232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56385 <nil> <nil>}
	I1210 07:19:02.600319    6232 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-525200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-525200/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-525200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:19:02.768432    6232 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:19:02.768432    6232 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:19:02.768432    6232 ubuntu.go:190] setting up certificates
	I1210 07:19:02.768432    6232 provision.go:84] configureAuth start
	I1210 07:19:02.771437    6232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-525200
	I1210 07:19:02.829032    6232 provision.go:143] copyHostCerts
	I1210 07:19:02.829032    6232 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:19:02.829032    6232 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:19:02.829679    6232 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:19:02.830540    6232 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:19:02.830540    6232 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:19:02.830694    6232 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:19:02.831299    6232 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:19:02.831299    6232 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:19:02.833600    6232 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:19:02.834724    6232 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-525200 san=[127.0.0.1 192.168.121.2 localhost minikube newest-cni-525200]
	I1210 07:19:03.022342    6232 provision.go:177] copyRemoteCerts
	I1210 07:19:03.026587    6232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:19:03.029582    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:03.083478    6232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56385 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:19:03.220358    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:19:03.250738    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:19:03.280879    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:19:03.311110    6232 provision.go:87] duration metric: took 542.6693ms to configureAuth
	I1210 07:19:03.311186    6232 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:19:03.311630    6232 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:19:03.316230    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:03.368200    6232 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:03.369196    6232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56385 <nil> <nil>}
	I1210 07:19:03.369196    6232 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:19:03.542235    6232 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:19:03.542235    6232 ubuntu.go:71] root file system type: overlay
	I1210 07:19:03.542235    6232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:19:03.545768    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:03.606602    6232 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:03.607611    6232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56385 <nil> <nil>}
	I1210 07:19:03.607611    6232 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:19:03.806844    6232 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:19:03.812738    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:03.870178    6232 main.go:143] libmachine: Using SSH client type: native
	I1210 07:19:03.871453    6232 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 56385 <nil> <nil>}
	I1210 07:19:03.871521    6232 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:19:05.490807    6232 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:19:03.794419685 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
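The diff-or-replace command above is an idempotent update idiom: diff -u exits non-zero only when the generated unit differs from the installed one, so the branch after || (move the new file into place, daemon-reload, enable, restart) runs only when something actually changed. The same pattern for an arbitrary service, with illustrative names:

    diff -u /etc/myapp.conf /etc/myapp.conf.new \
      || { sudo mv /etc/myapp.conf.new /etc/myapp.conf; sudo systemctl restart myapp; }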
	
	I1210 07:19:05.490880    6232 machine.go:97] duration metric: took 3.4637355s to provisionDockerMachine
	I1210 07:19:05.490880    6232 client.go:176] duration metric: took 30.8188614s to LocalClient.Create
	I1210 07:19:05.490912    6232 start.go:167] duration metric: took 30.8188938s to libmachine.API.Create "newest-cni-525200"
	I1210 07:19:05.490948    6232 start.go:293] postStartSetup for "newest-cni-525200" (driver="docker")
	I1210 07:19:05.490984    6232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:19:05.495048    6232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:19:05.497640    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:05.548635    6232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56385 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:19:05.681710    6232 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:19:05.689963    6232 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:19:05.689963    6232 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:19:05.689963    6232 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:19:05.690672    6232 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:19:05.691214    6232 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:19:05.697604    6232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:19:05.711053    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:19:05.741367    6232 start.go:296] duration metric: took 250.4153ms for postStartSetup
	I1210 07:19:05.747102    6232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-525200
	I1210 07:19:05.800004    6232 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\config.json ...
	I1210 07:19:05.805013    6232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:19:05.809991    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:05.859998    6232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56385 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:19:05.997201    6232 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:19:06.007928    6232 start.go:128] duration metric: took 31.3428821s to createHost
	I1210 07:19:06.007928    6232 start.go:83] releasing machines lock for "newest-cni-525200", held for 31.3428821s
	I1210 07:19:06.011684    6232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-525200
	I1210 07:19:06.067264    6232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:19:06.071549    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:06.071549    6232 ssh_runner.go:195] Run: cat /version.json
	I1210 07:19:06.075494    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:06.124602    6232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56385 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:19:06.126054    6232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56385 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	W1210 07:19:06.246672    6232 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
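The exit status 127 here is a host-naming artifact rather than a connectivity result: the probe reuses the Windows binary name curl.exe inside the Linux guest, where only curl exists, so the command never runs at all. The check the log intends would be, inside the container:

    curl -sS -m 2 https://registry.k8s.io/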
	I1210 07:19:06.260673    6232 ssh_runner.go:195] Run: systemctl --version
	I1210 07:19:06.274660    6232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:19:06.285536    6232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:19:06.291532    6232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W1210 07:19:06.342075    6232 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:19:06.342075    6232 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:19:06.342075    6232 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:19:06.342075    6232 start.go:496] detecting cgroup driver to use...
	I1210 07:19:06.342075    6232 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:19:06.342075    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:19:06.372047    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:19:06.390769    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:19:06.406783    6232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:19:06.411312    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:19:06.432337    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:19:06.454699    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:19:06.473609    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:19:06.496273    6232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:19:06.516106    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:19:06.535308    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:19:06.556020    6232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:19:06.574521    6232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:19:06.590528    6232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:19:06.606523    6232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:06.738641    6232 ssh_runner.go:195] Run: sudo systemctl restart containerd
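Taken together, the sed calls above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, select the cgroupfs driver (SystemdCgroup = false), migrate runtime handlers to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and enable unprivileged ports, after which containerd is restarted. The log never dumps the file, but the resulting CRI fragment would plausibly read:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false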
	I1210 07:19:06.935323    6232 start.go:496] detecting cgroup driver to use...
	I1210 07:19:06.935323    6232 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:19:06.943126    6232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:19:06.971122    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:19:06.994930    6232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:19:07.071995    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:19:07.093997    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:19:07.113708    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:19:07.143074    6232 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:19:07.154004    6232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:19:07.170617    6232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:19:07.194362    6232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:19:07.341161    6232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:19:07.480072    6232 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:19:07.480072    6232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
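The 130-byte daemon.json itself is not echoed in the log; a minimal file selecting the cgroupfs driver, assumed here rather than copied from the machine, would look like:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }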
	I1210 07:19:07.510020    6232 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:19:07.536679    6232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:07.707424    6232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:19:08.674015    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:19:08.696598    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:19:08.721642    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:19:08.749736    6232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:19:08.885890    6232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:19:09.040115    6232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:09.179660    6232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:19:09.204713    6232 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:19:09.229717    6232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:09.386724    6232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:19:09.497168    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:19:09.516273    6232 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:19:09.520951    6232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:19:09.527917    6232 start.go:564] Will wait 60s for crictl version
	I1210 07:19:09.532651    6232 ssh_runner.go:195] Run: which crictl
	I1210 07:19:09.544585    6232 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:19:09.588158    6232 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
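Since /etc/crictl.yaml (written above) already points at cri-dockerd, the same version query can be reproduced by hand against the socket, as a sketch:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version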
	I1210 07:19:09.591838    6232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:19:09.638057    6232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:19:09.678953    6232 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 07:19:09.682480    6232 cli_runner.go:164] Run: docker exec -t newest-cni-525200 dig +short host.docker.internal
	I1210 07:19:09.811061    6232 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:19:09.814047    6232 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:19:09.821063    6232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
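The /etc/hosts rewrite above is another idempotent idiom: drop any existing host.minikube.internal entry, append the fresh mapping, and copy the temp file back over /etc/hosts in one step. Generalized, with illustrative names:

    { grep -v $'\thost.example.internal$' /etc/hosts; \
      printf '%s\t%s\n' 192.168.65.254 host.example.internal; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts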
	I1210 07:19:09.839039    6232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:19:09.896251    6232 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:19:09.898250    6232 kubeadm.go:884] updating cluster {Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:19:09.898250    6232 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:19:09.901256    6232 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:19:09.935058    6232 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 07:19:09.935058    6232 docker.go:697] registry.k8s.io/etcd:3.6.5-0 wasn't preloaded
	I1210 07:19:09.939065    6232 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1210 07:19:09.957062    6232 ssh_runner.go:195] Run: which lz4
	I1210 07:19:09.970065    6232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 07:19:09.977071    6232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 07:19:09.978055    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284645196 bytes)
	I1210 07:19:12.877286    6232 docker.go:655] duration metric: took 2.9121765s to copy over tarball
	I1210 07:19:12.881289    6232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 07:19:15.721493    6232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.8401597s)
	I1210 07:19:15.721493    6232 ssh_runner.go:146] rm: /preloaded.tar.lz4
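The preload path is: scp the ~284 MB tarball to /preloaded.tar.lz4, unpack it over /var with tar's -I option (which runs lz4 as the decompression filter), then delete it. An equivalent explicit pipeline:

    lz4 -dc /preloaded.tar.lz4 | sudo tar --xattrs --xattrs-include security.capability -C /var -x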
	I1210 07:19:15.780036    6232 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1210 07:19:15.794024    6232 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2652 bytes)
	I1210 07:19:15.819795    6232 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:19:15.842518    6232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:15.982808    6232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:19:23.063511    6232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.0805929s)
	I1210 07:19:23.069375    6232 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:19:23.105987    6232 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 07:19:23.105987    6232 docker.go:697] registry.k8s.io/etcd:3.6.5-0 wasn't preloaded
	I1210 07:19:23.105987    6232 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:19:23.123097    6232 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:23.128019    6232 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:19:23.130998    6232 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:23.132989    6232 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:19:23.140170    6232 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:19:23.140170    6232 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:19:23.146151    6232 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:19:23.148889    6232 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:19:23.152172    6232 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:19:23.152898    6232 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:19:23.158296    6232 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:19:23.159266    6232 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:19:23.164277    6232 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:19:23.166187    6232 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:23.171401    6232 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:19:23.176401    6232 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	W1210 07:19:23.205694    6232 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.262393    6232 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.317527    6232 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.366118    6232 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.425054    6232 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.474451    6232 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.528599    6232 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:19:23.582704    6232 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:19:23.691434    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1210 07:19:23.702433    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1210 07:19:23.712860    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1210 07:19:23.731868    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1210 07:19:23.747867    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:19:23.764877    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1210 07:19:23.806451    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:23.837121    6232 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:19:23.837121    6232 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:19:23.837121    6232 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:23.841121    6232 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:19:23.869129    6232 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:19:23.873122    6232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:19:23.882126    6232 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:19:23.883125    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:19:23.938159    6232 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:19:24.148127    6232 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:19:24.148127    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:19:25.558290    6232 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (1.4101412s)
	I1210 07:19:25.558363    6232 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:19:25.558422    6232 cache_images.go:125] Successfully loaded all cached images
	I1210 07:19:25.558422    6232 cache_images.go:94] duration metric: took 2.4523974s to LoadCachedImages
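Only etcd needed repair here: the preload shipped registry.k8s.io/etcd:3.6.6-0 while this Kubernetes version expects 3.6.5-0, so the cached image tarball is scp'd up and piped into docker load (as run above). On the cache-producing side the counterpart would be an image export, e.g. with docker save (an assumption about how the cache is populated; minikube may use a different writer):

    docker save registry.k8s.io/etcd:3.6.5-0 -o etcd_3.6.5-0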
	I1210 07:19:25.558454    6232 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-rc.1 docker true true} ...
	I1210 07:19:25.558512    6232 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-525200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:19:25.562209    6232 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:19:25.655113    6232 cni.go:84] Creating CNI manager for ""
	I1210 07:19:25.655113    6232 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:19:25.655113    6232 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:19:25.655113    6232 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-525200 NodeName:newest-cni-525200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:19:25.655113    6232 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-525200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:19:25.660470    6232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:19:25.673349    6232 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:19:25.678049    6232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:19:25.693712    6232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1210 07:19:25.716605    6232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:19:25.738691    6232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
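Before kubeadm is invoked, the generated kubeadm.yaml.new can be checked offline; recent kubeadm releases ship a validator, so a sketch (assuming the subcommand is available in v1.35.0-rc.1) would be:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new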
	I1210 07:19:25.766781    6232 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:19:25.774366    6232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:19:25.796733    6232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:19:25.939226    6232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:19:25.962746    6232 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200 for IP: 192.168.121.2
	I1210 07:19:25.962746    6232 certs.go:195] generating shared ca certs ...
	I1210 07:19:25.962808    6232 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:25.963418    6232 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:19:25.963702    6232 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:19:25.963917    6232 certs.go:257] generating profile certs ...
	I1210 07:19:25.964310    6232 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.key
	I1210 07:19:25.964409    6232 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.crt with IP's: []
	I1210 07:19:26.031373    6232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.crt ...
	I1210 07:19:26.031373    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.crt: {Name:mkc957f1caad1d80a737245ee6e6c37fa56194d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:26.031685    6232 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.key ...
	I1210 07:19:26.031685    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.key: {Name:mk8d591853f1d586790271720fcd3a3d25b49164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:26.032616    6232 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key.96f8e4b6
	I1210 07:19:26.033384    6232 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt.96f8e4b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.121.2]
	I1210 07:19:26.099108    6232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt.96f8e4b6 ...
	I1210 07:19:26.099108    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt.96f8e4b6: {Name:mkefcc3ebf118af2e47f339b187c794ae6a3ee7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:26.100154    6232 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key.96f8e4b6 ...
	I1210 07:19:26.100154    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key.96f8e4b6: {Name:mk5ab674dbd9b4db008a296a1dc4a1b910509547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:26.101203    6232 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt.96f8e4b6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt
	I1210 07:19:26.114350    6232 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key.96f8e4b6 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key
	I1210 07:19:26.115515    6232 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.key
	I1210 07:19:26.115515    6232 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.crt with IP's: []
	I1210 07:19:26.179412    6232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.crt ...
	I1210 07:19:26.179412    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.crt: {Name:mkcbf59c2b47dbd78bd4770b43b6f293e4b68d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:26.180414    6232 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.key ...
	I1210 07:19:26.180414    6232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.key: {Name:mkee5174872a99386f384bf90319a276b883345c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:19:26.193455    6232 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:19:26.194314    6232 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:19:26.194314    6232 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:19:26.194589    6232 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:19:26.194724    6232 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:19:26.194902    6232 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:19:26.195074    6232 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:19:26.195361    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:19:26.226070    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:19:26.254435    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:19:26.285868    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:19:26.315315    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:19:26.345043    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:19:26.377501    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:19:26.406247    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:19:26.431754    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:19:26.462373    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:19:26.493857    6232 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:19:26.525237    6232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:19:26.553082    6232 ssh_runner.go:195] Run: openssl version
	I1210 07:19:26.567111    6232 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:19:26.586140    6232 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:19:26.604261    6232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:19:26.613730    6232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:19:26.618250    6232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:19:26.671114    6232 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:19:26.693241    6232 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:19:26.712433    6232 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:26.735480    6232 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:19:26.758735    6232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:26.766942    6232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:26.771872    6232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:19:26.819023    6232 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:19:26.835899    6232 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:19:26.852567    6232 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:19:26.868553    6232 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:19:26.887140    6232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:19:26.897048    6232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:19:26.901901    6232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:19:26.951335    6232 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:19:26.970527    6232 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
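The openssl/ln pairs above reproduce what c_rehash does: each CA certificate is exposed under a symlink named after its subject-name hash with a .0 suffix, which is how OpenSSL looks up trust anchors in /etc/ssl/certs. The shape of the operation:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"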
	I1210 07:19:26.987701    6232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:19:26.995314    6232 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:19:26.995603    6232 kubeadm.go:401] StartCluster: {Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:19:27.000000    6232 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:19:27.032713    6232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:19:27.050538    6232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:19:27.064781    6232 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:19:27.071196    6232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:19:27.086394    6232 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:19:27.086442    6232 kubeadm.go:158] found existing configuration files:
	
	I1210 07:19:27.091263    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:19:27.104981    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:19:27.111344    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:19:27.132623    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:19:27.146208    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:19:27.150489    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:19:27.167220    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:19:27.180960    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:19:27.185010    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:19:27.203082    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:19:27.217699    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:19:27.221979    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:19:27.239889    6232 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:19:27.354286    6232 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:19:27.445515    6232 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:19:27.544089    6232 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:23:29.140714    6232 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:23:29.140714    6232 kubeadm.go:319] 
	I1210 07:23:29.141783    6232 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
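This is the failure that sinks the test: kubeadm timed out waiting for the kubelet's health endpoint. On the node, the usual first diagnostics would be:

    curl -sSL http://127.0.0.1:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50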
	I1210 07:23:29.145571    6232 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:23:29.145835    6232 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:23:29.145835    6232 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:23:29.145835    6232 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:23:29.145835    6232 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:23:29.146439    6232 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:23:29.146621    6232 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:23:29.146726    6232 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:23:29.146867    6232 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:23:29.147014    6232 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:23:29.147065    6232 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:23:29.147065    6232 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:23:29.147065    6232 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:23:29.147065    6232 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:23:29.147065    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:23:29.147593    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:23:29.147777    6232 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:23:29.147920    6232 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:23:29.148078    6232 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:23:29.148353    6232 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:23:29.148484    6232 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:23:29.148669    6232 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:23:29.148669    6232 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:23:29.148669    6232 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:23:29.148669    6232 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:23:29.148669    6232 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:23:29.149198    6232 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:23:29.149396    6232 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:23:29.149601    6232 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:23:29.149760    6232 kubeadm.go:319] OS: Linux
	I1210 07:23:29.149885    6232 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:23:29.150044    6232 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:23:29.150152    6232 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:23:29.150276    6232 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:23:29.150476    6232 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:23:29.150649    6232 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:23:29.150811    6232 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:23:29.150945    6232 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:23:29.151131    6232 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:23:29.151131    6232 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:23:29.151131    6232 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:23:29.151131    6232 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:23:29.151729    6232 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:23:29.154048    6232 out.go:252]   - Generating certificates and keys ...
	I1210 07:23:29.154136    6232 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:23:29.154238    6232 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:23:29.154238    6232 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:23:29.154238    6232 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:23:29.154238    6232 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:23:29.154767    6232 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:23:29.154823    6232 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:23:29.154823    6232 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-525200] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1210 07:23:29.154823    6232 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:23:29.155346    6232 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-525200] and IPs [192.168.121.2 127.0.0.1 ::1]
	I1210 07:23:29.155488    6232 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:23:29.155488    6232 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:23:29.155488    6232 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:23:29.156016    6232 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:23:29.156076    6232 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:23:29.156076    6232 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:23:29.156076    6232 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:23:29.156076    6232 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:23:29.156076    6232 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:23:29.156647    6232 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:23:29.156647    6232 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:23:29.160902    6232 out.go:252]   - Booting up control plane ...
	I1210 07:23:29.160902    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:23:29.160902    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:23:29.160902    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:23:29.160902    6232 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:23:29.160902    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:23:29.161895    6232 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:23:29.162228    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:23:29.162228    6232 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:23:29.162228    6232 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:23:29.162228    6232 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:23:29.162228    6232 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001236976s
	I1210 07:23:29.162228    6232 kubeadm.go:319] 
	I1210 07:23:29.162228    6232 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:23:29.162228    6232 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:23:29.163231    6232 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:23:29.163231    6232 kubeadm.go:319] 
	I1210 07:23:29.163231    6232 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:23:29.163231    6232 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:23:29.163231    6232 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:23:29.163231    6232 kubeadm.go:319] 
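	(A minimal sketch of rerunning these kubelet checks by hand, assuming shell access to the node, e.g. "minikube ssh -p <profile>" — the first two commands are the ones suggested above, the third is the probe kubeadm's kubelet-check performs:
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  curl -sSL http://127.0.0.1:10248/healthz
	)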
	W1210 07:23:29.163231    6232 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-525200] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-525200] and IPs [192.168.121.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001236976s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 07:23:29.166790    6232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1210 07:23:29.640099    6232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:23:29.661380    6232 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:23:29.666301    6232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:23:29.681607    6232 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:23:29.681607    6232 kubeadm.go:158] found existing configuration files:
	
	I1210 07:23:29.686280    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:23:29.701273    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:23:29.705142    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:23:29.727206    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:23:29.739968    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:23:29.743941    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:23:29.758937    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:23:29.774583    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:23:29.779223    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:23:29.797692    6232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:23:29.810398    6232 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:23:29.815105    6232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
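	(The four grep/rm pairs above amount to one cleanup pass over the stale kubeconfig files; an equivalent sketch, assuming the same paths and marker URL:
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	      || sudo rm -f /etc/kubernetes/$f.conf
	  done
	)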
	I1210 07:23:29.832391    6232 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:23:29.949812    6232 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:23:30.032177    6232 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:23:30.142316    6232 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:27:30.845427    6232 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:27:30.845427    6232 kubeadm.go:319] 
	I1210 07:27:30.846026    6232 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:27:30.849126    6232 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:27:30.849126    6232 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:27:30.849126    6232 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:27:30.849730    6232 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:27:30.849899    6232 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:27:30.850054    6232 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:27:30.850170    6232 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:27:30.850377    6232 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:27:30.850502    6232 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:27:30.851207    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:27:30.851387    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:27:30.852012    6232 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] OS: Linux
	I1210 07:27:30.852734    6232 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:27:30.853345    6232 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:27:30.853498    6232 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:27:30.853705    6232 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:27:30.853932    6232 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:27:30.854761    6232 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:27:30.855081    6232 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:27:30.855238    6232 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:27:32.136934    6232 out.go:252]   - Generating certificates and keys ...
	I1210 07:27:32.137702    6232 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:27:32.137951    6232 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:27:32.138057    6232 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:27:32.138229    6232 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:27:32.138953    6232 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:27:32.139119    6232 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:27:32.139293    6232 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:27:32.139454    6232 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:27:32.139561    6232 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:27:32.139676    6232 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:27:32.139890    6232 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:27:32.176956    6232 out.go:252]   - Booting up control plane ...
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:27:32.177675    6232 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:27:32.177887    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:27:32.178633    6232 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:27:32.178747    6232 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:27:32.178747    6232 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00091283s
	I1210 07:27:32.178747    6232 kubeadm.go:319] 
	I1210 07:27:32.178747    6232 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:27:32.179272    6232 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:27:32.179465    6232 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:27:32.179465    6232 kubeadm.go:319] 
	I1210 07:27:32.180034    6232 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:27:32.180034    6232 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:27:32.180034    6232 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:27:32.180034    6232 kubeadm.go:319] 
	I1210 07:27:32.180034    6232 kubeadm.go:403] duration metric: took 8m5.1768914s to StartCluster
	I1210 07:27:32.180034    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:27:32.184805    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:27:32.252290    6232 cri.go:89] found id: ""
	I1210 07:27:32.252290    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.252290    6232 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:27:32.252290    6232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:27:32.257295    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:27:32.524390    6232 cri.go:89] found id: ""
	I1210 07:27:32.524390    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.524390    6232 logs.go:284] No container was found matching "etcd"
	I1210 07:27:32.524390    6232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:27:32.529570    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:27:32.574711    6232 cri.go:89] found id: ""
	I1210 07:27:32.574765    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.574765    6232 logs.go:284] No container was found matching "coredns"
	I1210 07:27:32.574765    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:27:32.579249    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:27:32.620467    6232 cri.go:89] found id: ""
	I1210 07:27:32.620543    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.620543    6232 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:27:32.620543    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:27:32.624698    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:27:32.678505    6232 cri.go:89] found id: ""
	I1210 07:27:32.678505    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.678505    6232 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:27:32.678505    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:27:32.683647    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:27:32.734494    6232 cri.go:89] found id: ""
	I1210 07:27:32.734494    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.734494    6232 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:27:32.734494    6232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:27:32.740109    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:27:32.782096    6232 cri.go:89] found id: ""
	I1210 07:27:32.782096    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.782096    6232 logs.go:284] No container was found matching "kindnet"
	I1210 07:27:32.782096    6232 logs.go:123] Gathering logs for kubelet ...
	I1210 07:27:32.782096    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:27:32.848542    6232 logs.go:123] Gathering logs for dmesg ...
	I1210 07:27:32.848542    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:27:32.887692    6232 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:27:32.887692    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:27:32.974167    6232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:27:32.961911   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.962935   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.963846   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.967478   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.968591   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:27:32.974167    6232 logs.go:123] Gathering logs for Docker ...
	I1210 07:27:32.974167    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:27:33.009144    6232 logs.go:123] Gathering logs for container status ...
	I1210 07:27:33.009144    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
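	(These log-gathering commands can be replayed verbatim on the node to inspect the same state:
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u docker -u cri-docker -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)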
	W1210 07:27:33.065279    6232 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:27:33.065279    6232 out.go:285] * 
	W1210 07:27:33.065279    6232 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:27:33.065279    6232 out.go:285] * 
	W1210 07:27:33.067510    6232 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:27:33.666818    6232 out.go:203] 
	W1210 07:27:33.825573    6232 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:27:33.825573    6232 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:27:33.825573    6232 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:27:33.873675    6232 out.go:203] 

                                                
                                                
** /stderr **
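The kubeadm failure above reduces to a single signal: the kubelet's local health endpoint at http://127.0.0.1:10248/healthz never answered within the 4m0s budget, so the control plane never came up. Below is a minimal Go sketch of the equivalent probe, the same check the log's `curl -sSL http://127.0.0.1:10248/healthz` hint describes; the URL and timeout are taken from the log, while the poll interval and program structure are illustrative, not kubeadm's or minikube's actual code.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeKubelet polls the kubelet healthz endpoint until it returns 200 OK
// or the deadline passes, mirroring kubeadm's wait-control-plane check.
func probeKubelet(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("kubelet healthy: %s\n", body)
				return nil
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval; kubeadm's differs
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := probeKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err) // "connection refused" here matches the failure above
	}
}

A "connection refused" from this probe, as in the log, means the kubelet never bound the port at all, which is why the suggested next step is `journalctl -xeu kubelet` rather than anything network-related.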
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-525200
helpers_test.go:244: (dbg) docker inspect newest-cni-525200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188",
	        "Created": "2025-12-10T07:18:58.277037255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 386736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:18:58.731857599Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hostname",
	        "HostsPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hosts",
	        "LogPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188-json.log",
	        "Name": "/newest-cni-525200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-525200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-525200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-525200",
	                "Source": "/var/lib/docker/volumes/newest-cni-525200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-525200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-525200",
	                "name.minikube.sigs.k8s.io": "newest-cni-525200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ee1da76fdf10ac9d4681072362e0cf44891c60757ab9c3416e1dbad070bcf47a",
	            "SandboxKey": "/var/run/docker/netns/ee1da76fdf10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56385"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56383"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56384"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-525200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e73cdc5fd1be9396722947f498060ee7b5757251a78043b99e30abfea0ec658b",
	                    "EndpointID": "6249979e88a9b3e5e68a719fd3a78844751030cbdde0814c42ef0e5994cbd694",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-525200",
	                        "6b7f9063cbda"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
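The inspect output matters to the harness mainly for NetworkSettings.Ports, which is how the host-side SSH and API endpoints are resolved (22/tcp -> 127.0.0.1:56385, 8443/tcp -> 127.0.0.1:56384 above). A minimal Go sketch of that lookup using a standard docker CLI format template; the container name comes from the output above, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host port published for the container's 22/tcp (SSH),
	// i.e. the NetworkSettings.Ports entry shown in the inspect dump above.
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"newest-cni-525200").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 56385 in this run
}

Note that the container is still "running" with all ports mapped; the failure is inside the guest (the kubelet), not in Docker's plumbing, which is consistent with the State block of the inspect output.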
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200: exit status 6 (602.3369ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:27:34.941270   11088 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-525200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
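The exit-6 status is a separate symptom worth noting: the failed start never wrote a "newest-cni-525200" entry into the kubeconfig, so the endpoint lookup in status.go fails. A minimal client-go sketch of that lookup (requires the k8s.io/client-go module; the path and profile name are taken from the error above, and this approximates minikube's check rather than reproducing its actual code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig and look for the profile's cluster entry; its
	// absence is exactly the "does not appear in" error logged above.
	cfg, err := clientcmd.LoadFromFile(`C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if cluster, ok := cfg.Clusters["newest-cni-525200"]; ok {
		fmt.Println("endpoint:", cluster.Server)
	} else {
		fmt.Println(`"newest-cni-525200" does not appear in kubeconfig`)
	}
}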
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25: (1.4000915s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat kubelet --no-pager                                                        │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo journalctl -xeu kubelet --all --full --no-pager                                         │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/kubernetes/kubelet.conf                                                        │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /var/lib/kubelet/config.yaml                                                        │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status docker --all --full --no-pager                                         │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat docker --no-pager                                                         │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/docker/daemon.json                                                             │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo docker system info                                                                      │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status cri-docker --all --full --no-pager                                     │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat cri-docker --no-pager                                                     │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /usr/lib/systemd/system/cri-docker.service                                          │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cri-dockerd --version                                                                   │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status containerd --all --full --no-pager                                     │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat containerd --no-pager                                                     │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /lib/systemd/system/containerd.service                                              │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/containerd/config.toml                                                         │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo containerd config dump                                                                  │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status crio --all --full --no-pager                                           │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │                     │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat crio --no-pager                                                           │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                 │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo crio config                                                                             │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ delete  │ -p enable-default-cni-648600                                                                                              │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ start   │ -p kubenet-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker │ kubenet-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │                     │
	│ ssh     │ -p bridge-648600 pgrep -a kubelet                                                                                         │ bridge-648600             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │ 10 Dec 25 07:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:26:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:26:48.266493    8148 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:26:48.309472    8148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:26:48.309472    8148 out.go:374] Setting ErrFile to fd 1140...
	I1210 07:26:48.309472    8148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:26:48.324472    8148 out.go:368] Setting JSON to false
	I1210 07:26:48.327483    8148 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10540,"bootTime":1765341068,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:26:48.327483    8148 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:26:48.337470    8148 out.go:179] * [kubenet-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:26:48.341482    8148 notify.go:221] Checking for updates...
	I1210 07:26:48.341482    8148 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:26:48.344471    8148 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:26:48.348475    8148 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:26:48.350471    8148 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:26:48.352481    8148 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:26:48.355481    8148 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:26:48.356479    8148 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:26:48.356479    8148 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:26:48.356479    8148 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:26:48.469490    8148 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:26:48.472884    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:26:48.701836    8148 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:26:48.684462647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:26:48.708834    8148 out.go:179] * Using the docker driver based on user configuration
	I1210 07:26:48.710831    8148 start.go:309] selected driver: docker
	I1210 07:26:48.710831    8148 start.go:927] validating driver "docker" against <nil>
	I1210 07:26:48.710831    8148 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:26:48.750214    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:26:48.989910    8148 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:26:48.972100581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:26:48.989910    8148 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:26:48.990914    8148 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:26:48.992900    8148 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:26:48.994901    8148 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1210 07:26:48.994901    8148 start.go:353] cluster config:
	{Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:26:48.997899    8148 out.go:179] * Starting "kubenet-648600" primary control-plane node in "kubenet-648600" cluster
	I1210 07:26:48.999898    8148 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:26:49.001905    8148 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:26:49.003899    8148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:26:49.003899    8148 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:26:49.041924    8148 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:26:49.075903    8148 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:26:49.075903    8148 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:26:49.333654    8148 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:26:49.333654    8148 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\config.json ...
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:26:49.334788    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\config.json: {Name:mkaac7fa5349378c0496ed588d277fbc123f31fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:26:49.334788    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:26:49.336061    8148 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:26:49.336061    8148 start.go:360] acquireMachinesLock for kubenet-648600: {Name:mk6a48ff53a7089496e004db762788b363661fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:49.336061    8148 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-648600"
	I1210 07:26:49.336061    8148 start.go:93] Provisioning new machine with config: &{Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:26:49.336657    8148 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:26:49.340104    8148 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:26:49.340610    8148 start.go:159] libmachine.API.Create for "kubenet-648600" (driver="docker")
	I1210 07:26:49.340757    8148 client.go:173] LocalClient.Create starting
	I1210 07:26:49.340893    8148 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:26:49.341421    8148 main.go:143] libmachine: Decoding PEM data...
	I1210 07:26:49.341512    8148 main.go:143] libmachine: Parsing certificate...
	I1210 07:26:49.341558    8148 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:26:49.341558    8148 main.go:143] libmachine: Decoding PEM data...
	I1210 07:26:49.341558    8148 main.go:143] libmachine: Parsing certificate...
	I1210 07:26:49.348012    8148 cli_runner.go:164] Run: docker network inspect kubenet-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:26:49.459518    8148 cli_runner.go:211] docker network inspect kubenet-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:26:49.468219    8148 network_create.go:284] running [docker network inspect kubenet-648600] to gather additional debugging logs...
	I1210 07:26:49.468219    8148 cli_runner.go:164] Run: docker network inspect kubenet-648600
	W1210 07:26:49.683727    8148 cli_runner.go:211] docker network inspect kubenet-648600 returned with exit code 1
	I1210 07:26:49.683727    8148 network_create.go:287] error running [docker network inspect kubenet-648600]: docker network inspect kubenet-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-648600 not found
	I1210 07:26:49.683727    8148 network_create.go:289] output of [docker network inspect kubenet-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-648600 not found
	
	** /stderr **
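
The failed inspect above is the expected probe, not a defect in itself: minikube treats exit status 1 with "not found" on stderr as proof the network does not exist yet, and only then creates it. A minimal Go sketch of that probe, assuming only that docker is on PATH; the helper name networkExists is illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists shells out to `docker network inspect`, mirroring the probe
// in the log: success means the network exists; exit code 1 plus a
// "not found" message on stderr means it is safe to create.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "not found") {
		return false, nil
	}
	return false, fmt.Errorf("docker network inspect %s: %w: %s", name, err, out)
}

func main() {
	exists, err := networkExists("kubenet-648600")
	fmt.Println(exists, err)
}
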
	I1210 07:26:49.688732    8148 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:26:49.787719    8148 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:26:49.823805    8148 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:26:50.041720    8148 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018195f0}
	I1210 07:26:50.041720    8148 network_create.go:124] attempt to create docker network kubenet-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:26:50.046595    8148 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600
	W1210 07:26:50.801474    8148 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600 returned with exit code 1
	W1210 07:26:50.801474    8148 network_create.go:149] failed to create docker network kubenet-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:26:50.801474    8148 network_create.go:116] failed to create docker network kubenet-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:26:50.923130    8148 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:26:51.134239    8148 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de23f0}
	I1210 07:26:51.134239    8148 network_create.go:124] attempt to create docker network kubenet-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:26:51.139999    8148 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600
	I1210 07:26:51.363502    8148 network_create.go:108] docker network kubenet-648600 192.168.76.0/24 created
	I1210 07:26:51.363502    8148 kic.go:121] calculated static IP "192.168.76.2" for the "kubenet-648600" container
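
The subnet walk in the lines above is worth pausing on: the candidates run 192.168.49.0/24, 58.0/24, 67.0/24, 76.0/24 (a stride of 9 in the third octet, as read off this log rather than any documented contract), already-reserved subnets are skipped up front, and an "invalid pool request: Pool overlaps" from `docker network create` sends the walk to the next candidate. The node's static IP is then simply the first client address after the gateway. A sketch of that walk, with a hypothetical isFree check standing in for minikube's reservation logic:

package main

import "fmt"

// isFree is a stand-in for the reservation check; here we pretend
// 192.168.49/58/67 are taken, matching this run of the log.
func isFree(octet int) bool { return octet >= 76 }

func main() {
	for o := 49; o < 255; o += 9 { // 49, 58, 67, 76, ... as seen in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", o)
		if !isFree(o) {
			fmt.Println("skipping reserved subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", o)
		nodeIP := fmt.Sprintf("192.168.%d.2", o) // first client address after the gateway
		fmt.Println("using", subnet, "gateway", gateway, "node IP", nodeIP)
		break
	}
}
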
	I1210 07:26:51.374867    8148 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:26:51.486602    8148 cli_runner.go:164] Run: docker volume create kubenet-648600 --label name.minikube.sigs.k8s.io=kubenet-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:26:51.579595    8148 oci.go:103] Successfully created a docker volume kubenet-648600
	I1210 07:26:51.586596    8148 cli_runner.go:164] Run: docker run --rm --name kubenet-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-648600 --entrypoint /usr/bin/test -v kubenet-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:26:52.364576    8148 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.365577    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:26:52.365577    8148 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.0307412s
	I1210 07:26:52.365577    8148 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:26:52.365577    8148 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.365577    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:26:52.366586    8148 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.0305751s
	I1210 07:26:52.366586    8148 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:26:52.366586    8148 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.367587    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:26:52.367587    8148 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.0328652s
	I1210 07:26:52.367587    8148 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:26:52.413544    8148 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.413544    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:26:52.413544    8148 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.0788213s
	I1210 07:26:52.413544    8148 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:26:52.427545    8148 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.428562    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:26:52.428562    8148 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.0935591s
	I1210 07:26:52.428562    8148 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:26:52.449539    8148 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.449539    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:26:52.449539    8148 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.1145826s
	I1210 07:26:52.449539    8148 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:26:52.487204    8148 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.487204    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:26:52.487204    8148 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.1522478s
	I1210 07:26:52.487204    8148 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:26:52.488221    8148 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.488221    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:26:52.488221    8148 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.153497s
	I1210 07:26:52.488221    8148 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:26:52.488221    8148 cache.go:87] Successfully saved all images to host disk.
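
Each cache.go block above follows the same shape: take a per-image named lock, stat the tarball under .minikube\cache\images, and report a hit ("exists ... took ...") instead of re-saving. A sketch of that check-then-save pattern; the lock map, path mangling, and helper name are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var locks sync.Map // image ref -> *sync.Mutex, one lock per image

// ensureCached mirrors the hit path in the log: under the image's lock,
// a stat on the tarball path decides whether a save is needed at all.
func ensureCached(cacheDir, image string) error {
	mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	// e.g. registry.k8s.io/coredns/coredns:v1.12.1 -> registry.k8s.io\coredns\coredns_v1.12.1
	rel := strings.ReplaceAll(image, ":", "_")
	tar := filepath.Join(cacheDir, filepath.FromSlash(rel))
	if _, err := os.Stat(tar); err == nil {
		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, tar, time.Since(start))
		return nil
	}
	return fmt.Errorf("cache miss for %s: pulling and saving elided in this sketch", image)
}

func main() {
	_ = ensureCached(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64`,
		"registry.k8s.io/coredns/coredns:v1.12.1")
}
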
	I1210 07:26:53.284247    8148 cli_runner.go:217] Completed: docker run --rm --name kubenet-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-648600 --entrypoint /usr/bin/test -v kubenet-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6976247s)
	I1210 07:26:53.284247    8148 oci.go:107] Successfully prepared a docker volume kubenet-648600
	I1210 07:26:53.284247    8148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:26:53.288253    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:26:53.801115    4804 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:26:53.801115    4804 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:53.801115    4804 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:53.801115    4804 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:53.802154    4804 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:53.802154    4804 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:53.805187    4804 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:53.805187    4804 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:53.805727    4804 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:53.806128    4804 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:26:53.806409    4804 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:26:53.806657    4804 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:26:53.806958    4804 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:26:53.807223    4804 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:26:53.807689    4804 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-648600 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1210 07:26:53.807736    4804 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:26:53.807736    4804 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-648600 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1210 07:26:53.807736    4804 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:26:53.808275    4804 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:26:53.808492    4804 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:26:53.808659    4804 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:53.808757    4804 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:53.808852    4804 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:53.808900    4804 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:53.808900    4804 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:53.808900    4804 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:53.809439    4804 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:53.809681    4804 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:53.811927    4804 out.go:252]   - Booting up control plane ...
	I1210 07:26:53.812166    4804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:53.812359    4804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:53.812549    4804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:53.812864    4804 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:53.812901    4804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:53.812901    4804 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:53.812901    4804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:53.813434    4804 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:53.813781    4804 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:53.814076    4804 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:26:53.814242    4804 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 547.278359ms
	I1210 07:26:53.814462    4804 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:26:53.814642    4804 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.112.2:8443/livez
	I1210 07:26:53.814807    4804 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:26:53.815044    4804 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:26:53.815084    4804 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 11.930253219s
	I1210 07:26:53.815084    4804 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.602564259s
	I1210 07:26:53.815084    4804 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 17.003641444s
	I1210 07:26:53.815621    4804 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:26:53.816061    4804 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:26:53.816185    4804 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:26:53.816693    4804 kubeadm.go:319] [mark-control-plane] Marking the node bridge-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:26:53.816805    4804 kubeadm.go:319] [bootstrap-token] Using token: x3nlvh.opxvhtc30zotsvgx
	I1210 07:26:53.819383    4804 out.go:252]   - Configuring RBAC rules ...
	I1210 07:26:53.819553    4804 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:26:53.819780    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:26:53.819826    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:26:53.819826    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:26:53.820458    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:26:53.820458    4804 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:26:53.820458    4804 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:26:53.821076    4804 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:26:53.821076    4804 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:26:53.821076    4804 kubeadm.go:319] 
	I1210 07:26:53.821076    4804 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:26:53.821076    4804 kubeadm.go:319] 
	I1210 07:26:53.821076    4804 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:26:53.821076    4804 kubeadm.go:319] 
	I1210 07:26:53.821076    4804 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:26:53.821649    4804 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:26:53.821695    4804 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:26:53.821695    4804 kubeadm.go:319] 
	I1210 07:26:53.821695    4804 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:26:53.821695    4804 kubeadm.go:319] 
	I1210 07:26:53.821695    4804 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:26:53.821695    4804 kubeadm.go:319] 
	I1210 07:26:53.821695    4804 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:26:53.822281    4804 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:26:53.822281    4804 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:26:53.822281    4804 kubeadm.go:319] 
	I1210 07:26:53.822281    4804 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:26:53.822281    4804 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:26:53.822281    4804 kubeadm.go:319] 
	I1210 07:26:53.822880    4804 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x3nlvh.opxvhtc30zotsvgx \
	I1210 07:26:53.822989    4804 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:26:53.822989    4804 kubeadm.go:319] 	--control-plane 
	I1210 07:26:53.822989    4804 kubeadm.go:319] 
	I1210 07:26:53.822989    4804 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:26:53.822989    4804 kubeadm.go:319] 
	I1210 07:26:53.822989    4804 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x3nlvh.opxvhtc30zotsvgx \
	I1210 07:26:53.822989    4804 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
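
The --discovery-token-ca-cert-hash in both join commands above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo; a joining node recomputes it to pin the control plane before trusting anything it says. It can be reproduced from ca.crt like this:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
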
	I1210 07:26:53.822989    4804 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:26:53.825853    4804 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 07:26:53.832854    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:26:53.880596    4804 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 07:26:53.976368    4804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:26:53.985234    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:53.986738    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-648600 minikube.k8s.io/updated_at=2025_12_10T07_26_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=bridge-648600 minikube.k8s.io/primary=true
	I1210 07:26:54.009995    4804 ops.go:34] apiserver oom_adj: -16
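
The oom_adj probe a few lines up (cat /proc/$(pgrep kube-apiserver)/oom_adj) confirms the apiserver is shielded from the kernel OOM killer; negative values make the kernel less likely to pick the process. The same read in Go, with a hard-coded pid standing in for the pgrep the log uses:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := 1234 // stand-in; the log resolves this with pgrep kube-apiserver
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		panic(err)
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(b))) // -16 here means strongly protected
}
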
	I1210 07:26:54.190378    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:53.550095    8148 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:26:53.530757229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:26:53.553467    8148 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:26:53.815828    8148 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-648600 --name kubenet-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-648600 --network kubenet-648600 --ip 192.168.76.2 --volume kubenet-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
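
The docker run above is the whole "VM": a privileged container on the freshly created network with a fixed IP, the cluster volume mounted at /var, and 127.0.0.1-bound publishes for ports 22, 2376, 5000, 8443 and 32443. The host side of each publish is assigned by Docker, which is why the log later reads the ports back with a container-inspect template (the 22/tcp HostPort lines further down; 57306 in this run). A sketch of that read-back, mirroring the template visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort recovers the ephemeral host-side port Docker assigned for a
// published container port, e.g. "22/tcp".
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("kubenet-648600", "22/tcp")
	fmt.Println(p, err) // e.g. 57306 in this run
}
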
	I1210 07:26:54.519455    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Running}}
	I1210 07:26:54.581248    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:26:54.641935    8148 cli_runner.go:164] Run: docker exec kubenet-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:26:54.756925    8148 oci.go:144] the created container "kubenet-648600" has a running status.
	I1210 07:26:54.756925    8148 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa...
	I1210 07:26:54.811927    8148 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:26:54.884927    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:26:54.944940    8148 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:26:54.945950    8148 kic_runner.go:114] Args: [docker exec --privileged kubenet-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:26:55.067842    8148 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa...
	I1210 07:26:57.304431    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:26:57.356002    8148 machine.go:94] provisionDockerMachine start ...
	I1210 07:26:57.358999    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:57.409999    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:57.423964    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:57.423964    8148 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:26:57.601161    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-648600
	
	I1210 07:26:57.601161    8148 ubuntu.go:182] provisioning hostname "kubenet-648600"
	I1210 07:26:57.603876    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:57.663309    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:57.663799    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:57.663874    8148 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-648600 && echo "kubenet-648600" | sudo tee /etc/hostname
	I1210 07:26:57.856555    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-648600
	
	I1210 07:26:57.860405    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:57.919561    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:57.919561    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:57.919561    8148 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:26:58.104063    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:26:58.104063    8148 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:26:58.104121    8148 ubuntu.go:190] setting up certificates
	I1210 07:26:58.104162    8148 provision.go:84] configureAuth start
	I1210 07:26:58.107864    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-648600
	I1210 07:26:58.171039    8148 provision.go:143] copyHostCerts
	I1210 07:26:58.171691    8148 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:26:58.171691    8148 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:26:58.171691    8148 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:26:58.172425    8148 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:26:58.172949    8148 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:26:58.173167    8148 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:26:58.173800    8148 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:26:58.173800    8148 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:26:58.173800    8148 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:26:58.174689    8148 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-648600 san=[127.0.0.1 192.168.76.2 kubenet-648600 localhost minikube]
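
The server cert generated here is what lets the host talk TLS to the Docker daemon inside the node container: its SANs must cover every name or address a client might dial (127.0.0.1 through the port forward, the container IP, the hostname, localhost, minikube). A compact sketch of issuing such a certificate from a CA; the throwaway self-signed CA in main exists only to make the example run, and the lifetime reuses the CertExpiration value from the config dump:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate whose SANs match the
// san=[...] list in the log above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-648600"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubenet-648600", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "throwaway CA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(der)
	cert, err := issueServerCert(ca, caKey)
	fmt.Println(len(cert), err)
}
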
	I1210 07:26:58.255032    8148 provision.go:177] copyRemoteCerts
	I1210 07:26:58.258056    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:26:58.261058    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:54.691938    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:55.191139    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:55.690239    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:56.191849    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:56.688904    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:57.191206    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:57.689800    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:58.189890    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:58.691249    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:59.274789    4804 kubeadm.go:1114] duration metric: took 5.2983382s to wait for elevateKubeSystemPrivileges
	I1210 07:26:59.274789    4804 kubeadm.go:403] duration metric: took 30.2725617s to StartCluster
	I1210 07:26:59.274789    4804 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:26:59.274789    4804 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:26:59.276563    4804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:26:59.276767    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:26:59.276767    4804 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:26:59.276767    4804 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:26:59.276767    4804 addons.go:70] Setting storage-provisioner=true in profile "bridge-648600"
	I1210 07:26:59.276767    4804 addons.go:239] Setting addon storage-provisioner=true in "bridge-648600"
	I1210 07:26:59.276767    4804 addons.go:70] Setting default-storageclass=true in profile "bridge-648600"
	I1210 07:26:59.276767    4804 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-648600"
	I1210 07:26:59.276767    4804 host.go:66] Checking if "bridge-648600" exists ...
	I1210 07:26:59.276767    4804 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:26:59.286058    4804 out.go:179] * Verifying Kubernetes components...
	I1210 07:26:59.287899    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:26:59.287948    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:26:59.293641    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:59.356090    4804 addons.go:239] Setting addon default-storageclass=true in "bridge-648600"
	I1210 07:26:59.356090    4804 host.go:66] Checking if "bridge-648600" exists ...
	I1210 07:26:59.360086    4804 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:59.362082    4804 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:26:59.362082    4804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:26:59.363112    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:26:59.366098    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:59.424089    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:59.438092    4804 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:26:59.438092    4804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:26:59.441091    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:59.499085    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:59.767231    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:26:59.803181    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:26:59.973487    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:26:59.995416    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:27:00.469947    4804 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
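
The long sed pipeline a few lines up (and the "host record injected" message here) splice a hosts{} block into the CoreDNS Corefile ahead of its forward plugin, so pods resolve host.minikube.internal to a host-reachable address (192.168.65.254 in this Docker Desktop run). The same splice in Go, on a trimmed example Corefile:

package main

import (
	"fmt"
	"strings"
)

// inject inserts a hosts{} block immediately before the forward plugin,
// matching what the sed expression in the log does to the live ConfigMap.
func inject(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(inject(corefile, "192.168.65.254"))
}
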
	I1210 07:27:00.475439    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:27:00.538068    4804 node_ready.go:35] waiting up to 15m0s for node "bridge-648600" to be "Ready" ...
	I1210 07:27:00.566207    4804 node_ready.go:49] node "bridge-648600" is "Ready"
	I1210 07:27:00.566420    4804 node_ready.go:38] duration metric: took 28.3072ms for node "bridge-648600" to be "Ready" ...
	I1210 07:27:00.566493    4804 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:27:00.573540    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:27:01.065842    4804 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-648600" context rescaled to 1 replicas
	I1210 07:27:01.467730    4804 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4942194s)
	I1210 07:27:01.467730    4804 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.472291s)
	I1210 07:27:01.467730    4804 api_server.go:72] duration metric: took 2.1909285s to wait for apiserver process to appear ...
	I1210 07:27:01.467730    4804 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:27:01.468738    4804 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57149/healthz ...
	I1210 07:27:01.487740    4804 api_server.go:279] https://127.0.0.1:57149/healthz returned 200:
	ok
	I1210 07:27:01.490750    4804 api_server.go:141] control plane version: v1.34.3
	I1210 07:27:01.490750    4804 api_server.go:131] duration metric: took 22.0114ms to wait for apiserver health ...
	I1210 07:27:01.490750    4804 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:27:01.495741    4804 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
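
The healthz sequence above (api_server.go) is a plain HTTPS poll of the forwarded apiserver port until /healthz returns 200 "ok". A sketch of that loop; the port is this run's ephemeral forward of the container's 8443, and InsecureSkipVerify stands in for the cluster-CA trust the real check configures:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://127.0.0.1:57149/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(time.Second) // not healthy yet; retry until the deadline
	}
	fmt.Println("apiserver never became healthy")
}
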
	I1210 07:26:58.316687    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:26:58.448850    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:26:58.481116    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1210 07:26:58.509443    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:26:58.539652    8148 provision.go:87] duration metric: took 435.4835ms to configureAuth
	I1210 07:26:58.539652    8148 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:26:58.540261    8148 config.go:182] Loaded profile config "kubenet-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:26:58.543649    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:58.601090    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:58.601090    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:58.601090    8148 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:26:58.781484    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:26:58.781484    8148 ubuntu.go:71] root file system type: overlay
	I1210 07:26:58.781484    8148 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:26:58.786185    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:58.844618    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:58.844618    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:58.844618    8148 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:26:59.066374    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:26:59.073011    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:59.134607    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:59.135262    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:59.135262    8148 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:27:00.643707    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:26:59.058519910 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
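	The SSH command issued at 07:26:59 is deliberately idempotent: diff -u exits non-zero only when the proposed unit differs from the installed one, so the || { mv ...; systemctl daemon-reload/enable/restart; } branch fires only on change, and an unchanged docker.service never triggers a restart. The same guard, sketched in Go (paths from the log; the surrounding provisioner plumbing is omitted):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged installs proposed over current only when their contents
	// differ, mirroring the `diff -u a b || { mv b a; ... }` guard in the log.
	// The caller is then expected to daemon-reload and restart docker.
	func replaceIfChanged(current, proposed string) (bool, error) {
		cur, err := os.ReadFile(current)
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		next, err := os.ReadFile(proposed)
		if err != nil {
			return false, err
		}
		if bytes.Equal(cur, next) {
			return false, nil // unit unchanged: skip the restart entirely
		}
		return true, os.Rename(proposed, current)
	}

	func main() {
		changed, err := replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("replaced:", changed)
	}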
	
	I1210 07:27:00.643707    8148 machine.go:97] duration metric: took 3.2876533s to provisionDockerMachine
	I1210 07:27:00.643707    8148 client.go:176] duration metric: took 11.3027721s to LocalClient.Create
	I1210 07:27:00.643707    8148 start.go:167] duration metric: took 11.3029188s to libmachine.API.Create "kubenet-648600"
	I1210 07:27:00.643707    8148 start.go:293] postStartSetup for "kubenet-648600" (driver="docker")
	I1210 07:27:00.643707    8148 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:27:00.649827    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:27:00.653512    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:00.716857    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:00.852079    8148 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:27:00.860820    8148 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:27:00.860820    8148 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:27:00.860820    8148 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:27:00.860820    8148 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:27:00.860820    8148 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:27:00.869338    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:27:00.884445    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:27:00.918715    8148 start.go:296] duration metric: took 274.9457ms for postStartSetup
	I1210 07:27:00.925503    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-648600
	I1210 07:27:00.987935    8148 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\config.json ...
	I1210 07:27:00.993933    8148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:27:00.996277    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:01.051137    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:01.182215    8148 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:27:01.194199    8148 start.go:128] duration metric: took 11.8573561s to createHost
	I1210 07:27:01.194242    8148 start.go:83] releasing machines lock for "kubenet-648600", held for 11.8579948s
	I1210 07:27:01.197930    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-648600
	I1210 07:27:01.248755    8148 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:27:01.252745    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:01.252745    8148 ssh_runner.go:195] Run: cat /version.json
	I1210 07:27:01.256527    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:01.318008    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:01.319032    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	W1210 07:27:01.443737    8148 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:27:01.447734    8148 ssh_runner.go:195] Run: systemctl --version
	I1210 07:27:01.462734    8148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:27:01.474736    8148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:27:01.478735    8148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:27:01.547557    8148 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:27:01.547557    8148 start.go:496] detecting cgroup driver to use...
	I1210 07:27:01.547557    8148 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:27:01.547557    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:27:01.555870    8148 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:27:01.555933    8148 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
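	Both warnings trace back to the probe at 07:27:01.248: the harness runs curl.exe, the Windows binary name, inside the Linux node over SSH, where no such binary exists, so bash exits 127 ("command not found") and the connectivity check is counted as a failure even though the registry was never actually tested. A sketch of telling that case apart (SSH endpoint taken from the log; the rest is illustrative):

	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		// ssh propagates the remote command's exit status; 127 from the remote
		// shell means the probe binary itself is missing, not that the
		// registry is unreachable.
		cmd := exec.Command("ssh", "-p", "57306", "docker@127.0.0.1",
			"curl.exe -sS -m 2 https://registry.k8s.io/")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 127 {
			log.Println("probe command not found on the node; result inconclusive")
			return
		}
		if err != nil {
			log.Println("registry probe failed:", err)
		}
	}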
	I1210 07:27:01.583378    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:27:01.603382    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:27:01.624306    8148 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:27:01.629592    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:27:01.650610    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:27:01.670408    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:27:01.693106    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:27:01.714490    8148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:27:01.733417    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:27:01.754000    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:27:01.776716    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
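	The run of sed -i -r commands above is a line-oriented rewrite of /etc/containerd/config.toml: pin the sandbox (pause) image, relax restrict_oom_score_adj, and force SystemdCgroup = false so containerd agrees with the detected cgroupfs driver. One of those edits restated in Go with the standard regexp package, same pattern and replacement:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		in, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(in, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			log.Fatal(err)
		}
	}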
	I1210 07:27:01.809438    8148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:27:01.826200    8148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:27:01.841594    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:02.005837    8148 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:27:02.170025    8148 start.go:496] detecting cgroup driver to use...
	I1210 07:27:02.170025    8148 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:27:02.175824    8148 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:27:02.202812    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:27:02.227869    8148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:27:02.283513    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:27:02.307088    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:27:02.329964    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:27:02.360310    8148 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:27:02.373353    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:27:02.387541    8148 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1210 07:27:02.414437    8148 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:27:02.553484    8148 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:27:02.664300    8148 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:27:02.664561    8148 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:27:02.696919    8148 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:27:02.720791    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:02.872487    8148 ssh_runner.go:195] Run: sudo systemctl restart docker
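	Note that from this point the log interleaves two test processes: PID 8148 continues provisioning kubenet-648600 while PID 4804 (the entries that follow) is already polling pods in the parallel bridge-648600 run, which is why the timestamps appear to jump backwards. The 130-byte /etc/docker/daemon.json written just above is not echoed in the log; a plausible shape, assuming the usual exec-opts key for selecting the cgroupfs driver (the contents are a guess, labeled as such):

	package main

	import (
		"encoding/json"
		"log"
		"os"
	)

	func main() {
		// The real payload is not shown in the log; this guesses at a minimal
		// daemon.json that selects the cgroupfs driver.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
			log.Fatal(err)
		}
	}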
	I1210 07:27:01.498736    4804 addons.go:530] duration metric: took 2.2219344s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 07:27:01.510837    4804 system_pods.go:59] 8 kube-system pods found
	I1210 07:27:01.510947    4804 system_pods.go:61] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.510947    4804 system_pods.go:61] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.510947    4804 system_pods.go:61] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:01.510947    4804 system_pods.go:61] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:01.510947    4804 system_pods.go:61] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:01.511017    4804 system_pods.go:61] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:01.511061    4804 system_pods.go:61] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:01.511061    4804 system_pods.go:61] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending
	I1210 07:27:01.511061    4804 system_pods.go:74] duration metric: took 20.3103ms to wait for pod list to return data ...
	I1210 07:27:01.511114    4804 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:27:01.515919    4804 default_sa.go:45] found service account: "default"
	I1210 07:27:01.515919    4804 default_sa.go:55] duration metric: took 4.8051ms for default service account to be created ...
	I1210 07:27:01.515919    4804 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:27:01.529893    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:01.529926    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.529926    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.529962    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:01.529962    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:01.529982    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:01.529982    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:01.529982    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:01.529982    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:01.530046    4804 retry.go:31] will retry after 254.830899ms: missing components: kube-dns, kube-proxy
	I1210 07:27:01.793715    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:01.793715    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.793715    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.793715    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:01.793715    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:01.793715    4804 retry.go:31] will retry after 366.083663ms: missing components: kube-dns, kube-proxy
	I1210 07:27:02.170025    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:02.170118    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.170118    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.170158    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:02.170158    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:02.170263    4804 retry.go:31] will retry after 379.768039ms: missing components: kube-dns, kube-proxy
	I1210 07:27:02.560125    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:02.560125    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.560125    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.560125    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:02.560125    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:02.560125    4804 retry.go:31] will retry after 606.226493ms: missing components: kube-dns, kube-proxy
	I1210 07:27:03.174432    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:03.174459    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:03.174529    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:03.174556    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Running
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:03.174556    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:03.174636    4804 system_pods.go:126] duration metric: took 1.6586915s to wait for k8s-apps to be running ...
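	The retry cadence visible above (254ms, 366ms, 380ms, 606ms) comes from a poll-with-growing-backoff loop over the kube-system pod list that returns once nothing is missing. Roughly the following shape (missingComponents is a stand-in, and the exact backoff policy in retry.go may differ):

	package main

	import (
		"log"
		"math/rand"
		"time"
	)

	// missingComponents stands in for the system_pods check in the log; it
	// would report e.g. ["kube-dns", "kube-proxy"] until those pods are Running.
	func missingComponents() []string { return nil }

	func main() {
		delay := 250 * time.Millisecond
		for {
			missing := missingComponents()
			if len(missing) == 0 {
				break
			}
			log.Printf("will retry after %v: missing components: %v", delay, missing)
			time.Sleep(delay)
			// Grow the interval with jitter between polls.
			delay += time.Duration(rand.Int63n(int64(delay)))
		}
		log.Println("k8s-apps running")
	}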
	I1210 07:27:03.174636    4804 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:27:03.180714    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:27:03.200565    4804 system_svc.go:56] duration metric: took 25.9286ms WaitForService to wait for kubelet
	I1210 07:27:03.200565    4804 kubeadm.go:587] duration metric: took 3.9237365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:27:03.200565    4804 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:27:03.206647    4804 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:27:03.206647    4804 node_conditions.go:123] node cpu capacity is 16
	I1210 07:27:03.206647    4804 node_conditions.go:105] duration metric: took 6.0818ms to run NodePressure ...
	I1210 07:27:03.206647    4804 start.go:242] waiting for startup goroutines ...
	I1210 07:27:03.206647    4804 start.go:247] waiting for cluster config update ...
	I1210 07:27:03.206647    4804 start.go:256] writing updated cluster config ...
	I1210 07:27:03.211392    4804 ssh_runner.go:195] Run: rm -f paused
	I1210 07:27:03.219165    4804 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:03.225327    4804 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:03.829804    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:27:03.853756    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:27:03.878736    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:27:03.906508    8148 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:27:04.058384    8148 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:27:04.211214    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:04.351382    8148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:27:04.377872    8148 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:27:04.403396    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:04.549162    8148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:27:04.679128    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:27:04.700666    8148 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:27:04.707017    8148 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:27:04.714385    8148 start.go:564] Will wait 60s for crictl version
	I1210 07:27:04.718390    8148 ssh_runner.go:195] Run: which crictl
	I1210 07:27:04.728384    8148 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:27:04.771885    8148 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:27:04.775508    8148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:27:04.821682    8148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:27:04.862789    8148 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:27:04.865761    8148 cli_runner.go:164] Run: docker exec -t kubenet-648600 dig +short host.docker.internal
	I1210 07:27:05.001510    8148 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:27:05.005662    8148 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:27:05.015254    8148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
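	That bash one-liner is how the host.minikube.internal mapping stays current: filter any existing entry out of /etc/hosts, append the freshly dug host IP, and copy the temp file back into place. The same steps in Go (path, hostname, and IP from the log):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// updateHosts drops any stale "host.minikube.internal" line, appends the
	// current mapping, and installs the result through a temp file, like the
	// grep -v / echo / cp one-liner above.
	func updateHosts(path, ip string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\thost.minikube.internal")
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := updateHosts("/etc/hosts", "192.168.65.254"); err != nil {
			log.Fatal(err)
		}
	}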
	I1210 07:27:05.035518    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:05.091651    8148 kubeadm.go:884] updating cluster {Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:27:05.092335    8148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:27:05.098114    8148 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:27:05.143060    8148 docker.go:691] Got preloaded images: 
	I1210 07:27:05.143094    8148 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:27:05.143094    8148 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:27:05.156578    8148 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:05.159580    8148 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.163577    8148 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:27:05.164593    8148 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:05.167594    8148 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.168579    8148 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.172590    8148 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.172590    8148 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:27:05.176595    8148 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.176595    8148 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.180596    8148 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.180596    8148 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.185592    8148 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.185592    8148 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.187578    8148 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.192578    8148 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	W1210 07:27:05.220575    8148 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.269574    8148 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.317615    8148 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.371485    8148 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.423045    8148 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.475036    8148 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.525315    8148 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.575962    8148 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:27:05.683131    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.684679    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.686876    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:27:05.715167    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.726748    8148 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:27:05.726748    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:27:05.726748    8148 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:27:05.726748    8148 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.726748    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:27:05.726748    8148 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.733086    8148 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:27:05.733086    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:27:05.733086    8148 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:27:05.733086    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.733086    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.736695    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:27:05.743504    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.780383    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.786007    8148 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:27:05.786007    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:27:05.786007    8148 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.791803    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.797754    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.869665    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:27:05.869665    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:27:05.869665    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:27:05.878211    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:27:05.879080    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:27:05.879124    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:27:05.884965    8148 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:27:05.884965    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:27:05.885022    8148 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.891293    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.910745    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:05.964950    8148 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:27:05.965026    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:27:05.965105    8148 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.971845    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.982670    8148 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:27:05.982670    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:27:05.982670    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:27:05.982670    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:27:05.982670    8148 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.982670    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:27:05.982670    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:27:05.982670    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:27:05.982670    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:27:05.982670    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:27:05.987758    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.989944    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:27:06.087065    8148 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:27:06.087065    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:27:06.087065    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:27:06.087065    8148 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:06.090779    8148 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:06.094460    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:27:06.177286    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:27:06.177286    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:27:06.177286    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:27:06.182289    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:27:06.201814    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:27:06.205794    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:27:06.214794    8148 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:27:06.214794    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:27:06.387866    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:27:06.387866    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:27:06.387866    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:27:06.387866    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:27:06.387866    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:27:06.387866    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:27:06.387866    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:27:06.394865    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:27:06.469869    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:27:06.469869    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:27:06.470873    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:27:07.386922    8148 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:27:07.386922    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1210 07:27:08.295994    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:27:08.295994    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:27:08.295994    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	W1210 07:27:05.249574    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	W1210 07:27:07.251932    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	I1210 07:27:10.869615    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.5735513s)
	I1210 07:27:10.869615    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:27:10.869615    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:27:10.869615    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:27:12.142371    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.272692s)
	I1210 07:27:12.142438    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:27:12.142487    8148 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:27:12.142539    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:27:09.737974    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	W1210 07:27:11.751244    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	I1210 07:27:17.659733    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (5.5170523s)
	I1210 07:27:17.659733    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:27:17.659733    8148 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:27:17.659733    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	W1210 07:27:14.889752    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	I1210 07:27:16.231616    4804 pod_ready.go:99] pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-drdxd" not found
	I1210 07:27:16.231616    4804 pod_ready.go:86] duration metric: took 13.0060841s for pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:16.231616    4804 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w2ff8" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:27:18.243104    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	I1210 07:27:20.501640    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8418624s)
	I1210 07:27:20.501640    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:27:20.501640    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:27:20.501640    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:27:21.955780    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.4541176s)
	I1210 07:27:21.955780    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:27:21.955780    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:27:21.955780    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	W1210 07:27:20.244187    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	W1210 07:27:22.743434    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	I1210 07:27:24.316577    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3607598s)
	I1210 07:27:24.316577    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:27:24.316577    8148 cache_images.go:125] Successfully loaded all cached images
	I1210 07:27:24.316577    8148 cache_images.go:94] duration metric: took 19.1731423s to LoadCachedImages
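	The 19.17s covers the whole LoadCachedImages pipeline traced above: for each required image, docker image inspect inside the node decides whether it "needs transfer"; missing ones are scp'd from the Windows-side cache into /var/lib/minikube/images and piped through docker load. Condensed to one function (runSSH and scp are stand-ins for minikube's ssh_runner helpers, not its real API):

	package main

	import "fmt"

	// ensureImage mirrors the per-image flow above: skip when the runtime
	// already has the image, otherwise transfer the cached tarball and
	// docker-load it.
	func ensureImage(runSSH func(string) error, scp func(local, remote string) error,
		image, cachedTar, remoteTar string) error {
		if runSSH("docker image inspect --format {{.Id}} "+image) == nil {
			return nil // present in the container runtime: no transfer needed
		}
		if err := scp(cachedTar, remoteTar); err != nil {
			return err
		}
		return runSSH("sudo cat " + remoteTar + " | docker load")
	}

	func main() {
		ok := func(string) error { return nil }         // pretend SSH succeeds
		cp := func(string, string) error { return nil } // pretend scp succeeds
		fmt.Println(ensureImage(ok, cp,
			"registry.k8s.io/pause:3.10.1",
			`C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1`,
			"/var/lib/minikube/images/pause_3.10.1"))
	}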
	I1210 07:27:24.316577    8148 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 docker true true} ...
	I1210 07:27:24.316577    8148 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
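	The generated kubelet unit uses the same drop-in idiom as the docker.service override earlier: an empty ExecStart= clears the command inherited from the base unit before the versioned binary is started with node-specific flags. Assembling that flag line (values copied from the log):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		nodeName, nodeIP := "kubenet-648600", "192.168.76.2"
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
			"--pod-cidr=10.244.0.0/16",
		}
		// The empty ExecStart= makes the drop-in replace, not append to, the
		// command inherited from the base unit.
		fmt.Println("ExecStart=")
		fmt.Println("ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet " + strings.Join(flags, " "))
	}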
	I1210 07:27:24.321252    8148 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:27:24.396301    8148 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1210 07:27:24.396301    8148 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:27:24.396301    8148 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-648600 NodeName:kubenet-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:27:24.396301    8148 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
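The generated file above is four YAML documents in one stream: InitConfiguration and ClusterConfiguration for kubeadm itself, a KubeletConfiguration, and a KubeProxyConfiguration. A quick way to sanity-check such a stream is to unmarshal the fields you care about; the struct below is a hypothetical subset (the real KubeletConfiguration type lives in k8s.io/kubelet/config/v1beta1), parsed here with gopkg.in/yaml.v3:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig is a hypothetical subset of the KubeletConfiguration
// fields emitted above; the real type lives in
// k8s.io/kubelet/config/v1beta1.
type kubeletConfig struct {
	APIVersion               string `yaml:"apiVersion"`
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	fmt.Printf("%s: cgroup driver %s, CRI endpoint %s\n",
		kc.Kind, kc.CgroupDriver, kc.ContainerRuntimeEndpoint)
}
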
	I1210 07:27:24.400789    8148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:27:24.413615    8148 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:27:24.420562    8148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:27:24.433705    8148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:27:24.433705    8148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:27:24.433705    8148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
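
The three binary.go lines above skip the local cache and fetch kubeadm, kubelet, and kubectl straight from dl.k8s.io, verifying each against the digest published next to it (the checksum=file:...sha256 annotation). A minimal sketch of that download-and-verify pattern, assuming the .sha256 URL returns just the hex digest (which is what dl.k8s.io serves):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks the bytes against the
// hex SHA-256 digest published at url+".sha256". Sketch only; minikube's
// real transfer goes through its own download package with progress and
// retries.
func fetchVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the payload is read only once.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl"
	if err := fetchVerified(url, "kubectl"); err != nil {
		panic(err)
	}
	fmt.Println("verified", url)
}
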
	I1210 07:27:24.439326    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:27:24.439990    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:27:24.440097    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:27:24.459701    8148 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:27:24.459701    8148 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:27:24.459701    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:27:24.459701    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:27:24.464028    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:27:24.478444    8148 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:27:24.478444    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 07:27:26.366215    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:27:26.381210    8148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1210 07:27:26.402108    8148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:27:26.421679    8148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1210 07:27:26.446789    8148 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:27:26.453858    8148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
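
The one-liner above makes the /etc/hosts update idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and swaps the file in via a temp copy. The same pattern in Go, as a local sketch (minikube runs the shell version remotely over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost reproduces the shell one-liner above: drop any line already
// ending in "\t<host>", append the fresh "ip\thost" mapping, and swap the
// result in via a temp file.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	if out != "" {
		out += "\n"
	}
	out += fmt.Sprintf("%s\t%s\n", ip, host)

	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Needs root to touch the real /etc/hosts.
	if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
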
	I1210 07:27:26.473492    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:26.611727    8148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:27:26.633782    8148 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600 for IP: 192.168.76.2
	I1210 07:27:26.633782    8148 certs.go:195] generating shared ca certs ...
	I1210 07:27:26.633782    8148 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.634686    8148 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:27:26.634965    8148 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:27:26.635033    8148 certs.go:257] generating profile certs ...
	I1210 07:27:26.635033    8148 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.key
	I1210 07:27:26.635033    8148 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.crt with IP's: []
	I1210 07:27:26.716276    8148 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.crt ...
	I1210 07:27:26.716276    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.crt: {Name:mk02489e14eca5a7daf32070f5a9d62031c71ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.717274    8148 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.key ...
	I1210 07:27:26.717274    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.key: {Name:mkeee5be306abd033b56aba0cd7f1437696b5d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.718610    8148 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377
	I1210 07:27:26.719151    8148 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:27:26.796439    8148 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377 ...
	I1210 07:27:26.796439    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377: {Name:mk334d88b1581e29df7bfa117bfc64a88d82a6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.797603    8148 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377 ...
	I1210 07:27:26.797603    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377: {Name:mkb37914eaff81ec29f8166cb1744ed358d062f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.798758    8148 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt
	I1210 07:27:26.812046    8148 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key
	I1210 07:27:26.812968    8148 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key
	I1210 07:27:26.812968    8148 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt with IP's: []
	I1210 07:27:26.850220    8148 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt ...
	I1210 07:27:26.850220    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt: {Name:mke528aabf4c458c4ee7e7f83cf38c91aa7bd3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.851417    8148 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key ...
	I1210 07:27:26.851417    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key: {Name:mk77fbad064846c54863bab29158ecadc03ea553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
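
Each profile cert generated above is an x509 certificate carrying a fixed set of IP SANs, e.g. [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] for the apiserver. A simplified sketch with crypto/x509 follows; it self-signs for brevity, whereas minikube signs these with the shared minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
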
	I1210 07:27:26.864338    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:27:26.864780    8148 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:27:26.864780    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:27:26.864780    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:27:26.864780    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:27:26.865410    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:27:26.865501    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:27:26.866122    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:27:26.900446    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:27:26.930968    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:27:26.958200    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:27:26.988784    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:27:27.022629    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:27:27.056583    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:27:27.085155    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:27:27.115053    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:27:27.150696    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:27:27.185123    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:27:27.216567    8148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:27:27.250850    8148 ssh_runner.go:195] Run: openssl version
	I1210 07:27:27.267552    8148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.288656    8148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:27:27.305368    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.314528    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.320508    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.367783    8148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:27:27.385141    8148 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:27:27.404873    8148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.425284    8148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:27:27.443965    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.454282    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.458396    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.506425    8148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:27:27.526530    8148 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:27:27.543298    8148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.561927    8148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:27:27.581244    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.590435    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.594883    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.642220    8148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:27:27.663487    8148 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
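
The sequence above installs each CA the way OpenSSL expects: copy the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and symlink /etc/ssl/certs/<hash>.0 (here 3ec20f2e.0, b5213941.0, and 51391683.0) back at it. A small sketch of that pattern, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert mirrors the pattern in the log: ask openssl for the cert's
// subject hash, then symlink <linkDir>/<hash>.0 back at the PEM so
// OpenSSL's hash-based lookup finds it. Run as root (or point linkDir
// somewhere writable) to try it.
func linkCACert(pemPath, linkDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := linkDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // emulate ln -fs: drop any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
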
	I1210 07:27:27.686581    8148 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:27:27.694193    8148 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:27:27.694507    8148 kubeadm.go:401] StartCluster: {Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:27:27.698413    8148 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:27:27.737485    8148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:27:27.756360    8148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:27:27.771482    8148 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:27:27.775635    8148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:27:27.789286    8148 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:27:27.789286    8148 kubeadm.go:158] found existing configuration files:
	
	I1210 07:27:27.794294    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:27:27.808089    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:27:27.812410    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:27:27.832458    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:27:27.846970    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:27:27.850863    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:27:27.868609    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:27:27.885013    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:27:27.891495    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:27:27.909197    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:27:27.922667    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:27:27.927530    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:27:27.944790    8148 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:27:28.058922    8148 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:27:28.064312    8148 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:27:28.172527    8148 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1210 07:27:24.746112    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	I1210 07:27:27.243122    4804 pod_ready.go:94] pod "coredns-66bc5c9577-w2ff8" is "Ready"
	I1210 07:27:27.243148    4804 pod_ready.go:86] duration metric: took 11.0113593s for pod "coredns-66bc5c9577-w2ff8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.250850    4804 pod_ready.go:83] waiting for pod "etcd-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.259311    4804 pod_ready.go:94] pod "etcd-bridge-648600" is "Ready"
	I1210 07:27:27.259351    4804 pod_ready.go:86] duration metric: took 8.501ms for pod "etcd-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.264046    4804 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.273922    4804 pod_ready.go:94] pod "kube-apiserver-bridge-648600" is "Ready"
	I1210 07:27:27.273922    4804 pod_ready.go:86] duration metric: took 9.8554ms for pod "kube-apiserver-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.278592    4804 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.437083    4804 pod_ready.go:94] pod "kube-controller-manager-bridge-648600" is "Ready"
	I1210 07:27:27.437083    4804 pod_ready.go:86] duration metric: took 158.4885ms for pod "kube-controller-manager-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.639502    4804 pod_ready.go:83] waiting for pod "kube-proxy-rvxdz" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.037381    4804 pod_ready.go:94] pod "kube-proxy-rvxdz" is "Ready"
	I1210 07:27:28.037381    4804 pod_ready.go:86] duration metric: took 397.7826ms for pod "kube-proxy-rvxdz" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.237745    4804 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.637206    4804 pod_ready.go:94] pod "kube-scheduler-bridge-648600" is "Ready"
	I1210 07:27:28.637301    4804 pod_ready.go:86] duration metric: took 399.4761ms for pod "kube-scheduler-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.637301    4804 pod_ready.go:40] duration metric: took 25.4177367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:28.739566    4804 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:27:28.742299    4804 out.go:179] * Done! kubectl is now configured to use "bridge-648600" cluster and "default" namespace by default
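
The pod_ready.go lines that close out this bridge-648600 run all follow one wait loop: poll a pod until it is Ready, treat "not found" as terminal too (hence the "Ready or be gone" wording), and report a duration metric. The shape of that loop, as a generic sketch with hypothetical names (the real code inspects live pod status via the Kubernetes API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitReady polls check until the target reports Ready, is gone, or the
// deadline passes.
func waitReady(name string, timeout, interval time.Duration,
	check func() (ready, gone bool, err error)) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		ready, gone, err := check()
		if err != nil {
			return err
		}
		if ready || gone {
			fmt.Printf("duration metric: took %s for %q\n", time.Since(start), name)
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New(name + ": timed out waiting to be Ready or gone")
		}
		time.Sleep(interval)
	}
}

func main() {
	polls := 0
	err := waitReady("coredns-66bc5c9577-w2ff8", 4*time.Minute, 500*time.Millisecond,
		func() (bool, bool, error) {
			polls++
			return polls >= 3, false, nil // pretend the pod turns Ready on the third poll
		})
	if err != nil {
		fmt.Println(err)
	}
}
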
	I1210 07:27:30.845427    6232 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:27:30.845427    6232 kubeadm.go:319] 
	I1210 07:27:30.846026    6232 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:27:30.849126    6232 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:27:30.849126    6232 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:27:30.849126    6232 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:27:30.849730    6232 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:27:30.849899    6232 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:27:30.850054    6232 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:27:30.850170    6232 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:27:30.850377    6232 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:27:30.850502    6232 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:27:30.851207    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:27:30.851387    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:27:30.852012    6232 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] OS: Linux
	I1210 07:27:30.852734    6232 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:27:30.853345    6232 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:27:30.853498    6232 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:27:30.853705    6232 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:27:30.853932    6232 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:27:30.854761    6232 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:27:30.855081    6232 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:27:30.855238    6232 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:27:32.136934    6232 out.go:252]   - Generating certificates and keys ...
	I1210 07:27:32.137702    6232 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:27:32.137951    6232 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:27:32.138057    6232 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:27:32.138229    6232 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:27:32.138953    6232 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:27:32.139119    6232 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:27:32.139293    6232 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:27:32.139454    6232 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:27:32.139561    6232 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:27:32.139676    6232 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:27:32.139890    6232 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:27:32.176956    6232 out.go:252]   - Booting up control plane ...
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:27:32.177675    6232 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:27:32.177887    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:27:32.178633    6232 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:27:32.178747    6232 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:27:32.178747    6232 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00091283s
	I1210 07:27:32.178747    6232 kubeadm.go:319] 
	I1210 07:27:32.178747    6232 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:27:32.179272    6232 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:27:32.179465    6232 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:27:32.179465    6232 kubeadm.go:319] 
	I1210 07:27:32.180034    6232 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:27:32.180034    6232 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:27:32.180034    6232 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:27:32.180034    6232 kubeadm.go:319] 
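
The kubelet-check that fails here is, per kubeadm's own message, just an HTTP GET against http://127.0.0.1:10248/healthz retried until a deadline (up to 4m0s above). A sketch of a probe with that shape; on this node it would see the same "connection refused" because the kubelet never came up:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeKubelet retries the healthz endpoint that kubeadm's kubelet-check
// polls (the log's "curl -sSL http://127.0.0.1:10248/healthz") until it
// answers 200 or the deadline passes. Sketch of the check's shape only.
func probeKubelet(timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kubelet not healthy after %s (last error: %v)", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := probeKubelet(10 * time.Second); err != nil {
		fmt.Println(err) // on this node: connection refused, matching the log
	}
}
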
	I1210 07:27:32.180034    6232 kubeadm.go:403] duration metric: took 8m5.1768914s to StartCluster
	I1210 07:27:32.180034    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:27:32.184805    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:27:32.252290    6232 cri.go:89] found id: ""
	I1210 07:27:32.252290    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.252290    6232 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:27:32.252290    6232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:27:32.257295    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:27:32.524390    6232 cri.go:89] found id: ""
	I1210 07:27:32.524390    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.524390    6232 logs.go:284] No container was found matching "etcd"
	I1210 07:27:32.524390    6232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:27:32.529570    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:27:32.574711    6232 cri.go:89] found id: ""
	I1210 07:27:32.574765    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.574765    6232 logs.go:284] No container was found matching "coredns"
	I1210 07:27:32.574765    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:27:32.579249    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:27:32.620467    6232 cri.go:89] found id: ""
	I1210 07:27:32.620543    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.620543    6232 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:27:32.620543    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:27:32.624698    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:27:32.678505    6232 cri.go:89] found id: ""
	I1210 07:27:32.678505    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.678505    6232 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:27:32.678505    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:27:32.683647    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:27:32.734494    6232 cri.go:89] found id: ""
	I1210 07:27:32.734494    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.734494    6232 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:27:32.734494    6232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:27:32.740109    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:27:32.782096    6232 cri.go:89] found id: ""
	I1210 07:27:32.782096    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.782096    6232 logs.go:284] No container was found matching "kindnet"
	I1210 07:27:32.782096    6232 logs.go:123] Gathering logs for kubelet ...
	I1210 07:27:32.782096    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:27:32.848542    6232 logs.go:123] Gathering logs for dmesg ...
	I1210 07:27:32.848542    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:27:32.887692    6232 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:27:32.887692    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:27:32.974167    6232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:27:32.961911   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.962935   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.963846   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.967478   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.968591   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:27:32.961911   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.962935   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.963846   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.967478   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.968591   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:27:32.974167    6232 logs.go:123] Gathering logs for Docker ...
	I1210 07:27:32.974167    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:27:33.009144    6232 logs.go:123] Gathering logs for container status ...
	I1210 07:27:33.009144    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:27:33.065279    6232 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:27:33.065279    6232 out.go:285] * 
	W1210 07:27:33.067510    6232 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:27:33.666818    6232 out.go:203] 
	W1210 07:27:33.825573    6232 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:27:33.825573    6232 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:27:33.825573    6232 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:27:33.873675    6232 out.go:203] 
	
	
	==> Docker <==
	Dec 10 07:19:16 newest-cni-525200 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.062791191Z" level=info msg="Starting up"
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.084601896Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.084748710Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.084762511Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.101611637Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.245015073Z" level=info msg="Loading containers: start."
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.245162987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.400213681Z" level=info msg="Restoring containers: start."
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.481783615Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.531401619Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.874477245Z" level=info msg="Loading containers: done."
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923622004Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923705712Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923715913Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923722613Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923729214Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923757017Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923825523Z" level=info msg="Initializing buildkit"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.052360909Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.059794414Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.060067240Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.060194252Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.060089142Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:19:23 newest-cni-525200 systemd[1]: Started docker.service - Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:27:36.132999   10834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:36.134262   10834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:36.136591   10834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:36.139450   10834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:36.141213   10834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:27] CPU: 4 PID: 450214 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fc67e5a5b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fc67e5a5af6.
	[  +0.000001] RSP: 002b:00007fffb6f4ee10 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.822699] CPU: 6 PID: 450591 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fa8d5a60b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fa8d5a60af6.
	[  +0.000001] RSP: 002b:00007ffd78f04e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +31.850979] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:27:36 up  2:55,  0 user,  load average: 4.46, 5.29, 4.78
	Linux newest-cni-525200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:27:33 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:33 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 07:27:33 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:33 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:33 newest-cni-525200 kubelet[10684]: E1210 07:27:33.942075   10684 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:33 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:33 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:34 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 07:27:34 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:34 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:34 newest-cni-525200 kubelet[10695]: E1210 07:27:34.697777   10695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:34 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:34 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:35 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 07:27:35 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:35 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:35 newest-cni-525200 kubelet[10723]: E1210 07:27:35.462516   10723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:35 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:35 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:36 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 07:27:36 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:36 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:36 newest-cni-525200 kubelet[10843]: E1210 07:27:36.213072   10843 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:36 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:36 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 6 (704.9878ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:27:37.552141    9400 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-525200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-525200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (543.90s)
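The kubelet journal above pinpoints the blocker: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and this WSL2 node is still on cgroup v1, so kubeadm's 4m0s wait on http://127.0.0.1:10248/healthz can never succeed. A minimal triage sketch follows, assuming the profile is still up and reachable via minikube ssh; the failCgroupV1 field name is taken from the SystemVerification warning above, and the .wslconfig change is the commonly suggested way to move Docker Desktop's WSL2 backend to cgroup v2 (verify both against current kubelet/WSL docs):

	# Confirm the node's cgroup version: "cgroup2fs" means v2, "tmpfs" means v1
	minikube ssh -p newest-cni-525200 "stat -fc %T /sys/fs/cgroup"
	# Watch the crash loop directly (the restart counter above is already at 324)
	minikube ssh -p newest-cni-525200 "sudo journalctl -xeu kubelet --no-pager"
	# Per the kubeadm warning, cgroup v1 must now be opted into explicitly, e.g. in the
	# KubeletConfiguration that kubeadm writes to /var/lib/kubelet/config.yaml:
	#   failCgroupV1: false
	# Longer term, enable cgroup v2 for WSL2 in %UserProfile%\.wslconfig and restart WSL:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all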

TestStartStop/group/no-preload/serial/DeployApp (5.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-099700 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-099700 create -f testdata\busybox.yaml: exit status 1 (93.4058ms)

** stderr ** 
	error: context "no-preload-099700" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-099700 create -f testdata\busybox.yaml failed: exit status 1
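The create fails before anything reaches a cluster: context "no-preload-099700" does not exist means the profile entry was never written to the kubeconfig, and the status check below reports the same missing endpoint. A quick repair sketch, assuming the profile container is otherwise healthy (both commands are stock kubectl/minikube; minikube's own stdout below suggests update-context):

	# The failing context should be absent from this list
	kubectl config get-contexts
	# Regenerate the kubeconfig entry from the live profile, then retry the create
	minikube update-context -p no-preload-099700
	kubectl --context no-preload-099700 create -f testdata\busybox.yaml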
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:17:16.221120749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19d075be822285a6bc04718614fae0d1e6b527c2b7b973ed840dd03da78703c1",
	            "SandboxKey": "/var/run/docker/netns/19d075be8222",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56155"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56156"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "bd211c76c769a23696ddb9b2e4a3cd1f6c2388bff504ec060a8ffe809e64dcb5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
E1210 07:26:11.448791   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 6 (577.9933ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:26:11.795444   10516 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (1.0981433s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                     │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-648600 sudo systemctl cat kubelet --no-pager                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /var/lib/kubelet/config.yaml                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status docker --all --full --no-pager                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat docker --no-pager                                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/docker/daemon.json                                                           │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo docker system info                                                                    │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status cri-docker --all --full --no-pager                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat cri-docker --no-pager                                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                              │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service                                        │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cri-dockerd --version                                                                 │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status containerd --all --full --no-pager                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat containerd --no-pager                                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /lib/systemd/system/containerd.service                                            │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/containerd/config.toml                                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo containerd config dump                                                                │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status crio --all --full --no-pager                                         │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │                     │
	│ ssh     │ -p flannel-648600 sudo systemctl cat crio --no-pager                                                         │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                               │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo crio config                                                                           │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ delete  │ -p flannel-648600                                                                                            │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ start   │ -p bridge-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker │ bridge-648600             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │                     │
	│ ssh     │ -p enable-default-cni-648600 pgrep -a kubelet                                                                │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:25:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:25:49.543159    4804 out.go:360] Setting OutFile to fd 1260 ...
	I1210 07:25:49.586332    4804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:25:49.586332    4804 out.go:374] Setting ErrFile to fd 812...
	I1210 07:25:49.586377    4804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:25:49.601444    4804 out.go:368] Setting JSON to false
	I1210 07:25:49.603301    4804 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10481,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:25:49.603301    4804 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:25:49.607247    4804 out.go:179] * [bridge-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:25:49.610906    4804 notify.go:221] Checking for updates...
	I1210 07:25:49.613490    4804 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:25:49.615618    4804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:25:49.617459    4804 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:25:49.620105    4804 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:25:49.622698    4804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:25:47.110970   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	W1210 07:25:49.622010   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	I1210 07:25:49.625061    4804 config.go:182] Loaded profile config "enable-default-cni-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:25:49.625872    4804 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:25:49.626037    4804 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:25:49.626037    4804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:25:49.756585    4804 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:25:49.760223    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:25:49.995247    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:49.978486557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:49.998261    4804 out.go:179] * Using the docker driver based on user configuration
	I1210 07:25:50.001264    4804 start.go:309] selected driver: docker
	I1210 07:25:50.002267    4804 start.go:927] validating driver "docker" against <nil>
	I1210 07:25:50.002267    4804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:25:50.087841    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:25:50.326740    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:50.304007932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:50.326740    4804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:25:50.328404    4804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:25:50.338396    4804 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:25:50.340335    4804 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:25:50.340335    4804 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:25:50.340335    4804 start.go:353] cluster config:
	{Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:25:50.343283    4804 out.go:179] * Starting "bridge-648600" primary control-plane node in "bridge-648600" cluster
	I1210 07:25:50.346532    4804 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:25:50.348744    4804 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:25:50.351187    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:25:50.351187    4804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:25:50.394442    4804 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:25:50.434159    4804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:25:50.434159    4804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:25:50.622000    4804 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:25:50.622000    4804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json ...
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:25:50.622000    4804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json: {Name:mkda6ce656f671ed6502f97ceabe139018dc3485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
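The "windows sanitize" lines above map the ':' tag separator, which is illegal in Windows file names, to '_' before images are cached on disk. The same mapping as a one-line shell sketch (minikube does this in Go, in localpath.go):

    # Illustrative only: make an image reference safe as an NTFS path
    img="registry.k8s.io/etcd:3.6.5-0"
    echo "${img//:/_}"   # -> registry.k8s.io/etcd_3.6.5-0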
	I1210 07:25:50.623233    4804 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:25:50.623233    4804 start.go:360] acquireMachinesLock for bridge-648600: {Name:mk22986727a0b030c8919e2ba8ce1cc03f255d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:50.623828    4804 start.go:364] duration metric: took 594.9µs to acquireMachinesLock for "bridge-648600"
	I1210 07:25:50.624001    4804 start.go:93] Provisioning new machine with config: &{Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:25:50.624086    4804 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:25:50.630473    4804 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:25:50.631176    4804 start.go:159] libmachine.API.Create for "bridge-648600" (driver="docker")
	I1210 07:25:50.631275    4804 client.go:173] LocalClient.Create starting
	I1210 07:25:50.631359    4804 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:25:50.631951    4804 main.go:143] libmachine: Decoding PEM data...
	I1210 07:25:50.631985    4804 main.go:143] libmachine: Parsing certificate...
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Decoding PEM data...
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Parsing certificate...
	I1210 07:25:50.637892    4804 cli_runner.go:164] Run: docker network inspect bridge-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:25:50.766370    4804 cli_runner.go:211] docker network inspect bridge-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:25:50.773371    4804 network_create.go:284] running [docker network inspect bridge-648600] to gather additional debugging logs...
	I1210 07:25:50.773371    4804 cli_runner.go:164] Run: docker network inspect bridge-648600
	W1210 07:25:50.940187    4804 cli_runner.go:211] docker network inspect bridge-648600 returned with exit code 1
	I1210 07:25:50.940187    4804 network_create.go:287] error running [docker network inspect bridge-648600]: docker network inspect bridge-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-648600 not found
	I1210 07:25:50.940187    4804 network_create.go:289] output of [docker network inspect bridge-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-648600 not found
	
	** /stderr **
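The exit code 1 above is expected: minikube probes for a network named after the profile and treats "not found" as the cue to create one. The probe-then-create pattern, condensed from the commands in the log:

    # Create the cluster network only if it does not already exist
    docker network inspect bridge-648600 >/dev/null 2>&1 \
      || docker network create bridge-648600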
	I1210 07:25:50.943184    4804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:51.023198    4804 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.067123    4804 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.118311    4804 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.415711    4804 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.593726    4804 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.749810    4804 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.785598    4804 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.819602    4804 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc7020}
	I1210 07:25:51.819602    4804 network_create.go:124] attempt to create docker network bridge-648600 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1210 07:25:51.824606    4804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-648600 bridge-648600
	I1210 07:25:52.457680    4804 network_create.go:108] docker network bridge-648600 192.168.112.0/24 created
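The scan above walks candidate /24 subnets from 192.168.49.0/24 upward in steps of 9 (49, 58, 67, ...) and settles on the first one that no existing Docker network claims, here 192.168.112.0/24. A rough shell rendering of that probe (a sketch only; minikube computes this in network.go):

    # Hypothetical re-implementation of the free-subnet scan in the log
    for third in 49 58 67 76 85 94 103 112; do
      subnet="192.168.${third}.0/24"
      docker network inspect $(docker network ls -q) \
        --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
        | grep -qx "$subnet" || { echo "free: $subnet"; break; }
    done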
	I1210 07:25:52.457680    4804 kic.go:121] calculated static IP "192.168.112.2" for the "bridge-648600" container
	I1210 07:25:52.468991    4804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:25:52.567040    4804 cli_runner.go:164] Run: docker volume create bridge-648600 --label name.minikube.sigs.k8s.io=bridge-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:25:52.654855    4804 oci.go:103] Successfully created a docker volume bridge-648600
	I1210 07:25:52.661290    4804 cli_runner.go:164] Run: docker run --rm --name bridge-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --entrypoint /usr/bin/test -v bridge-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
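The throwaway "preload-sidecar" container above exists only to materialize the named volume: mounting bridge-648600 at /var triggers Docker's copy of the image's /var contents into the fresh volume on first mount, and the /usr/bin/test entrypoint is a no-op check that the copy produced /var/lib. The trick in isolation (image digest from the log elided for brevity):

    # Populate a named volume from an image's /var via first-mount copy
    docker run --rm -v bridge-648600:/var --entrypoint /usr/bin/test \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083 -d /var/lib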
	I1210 07:25:53.606607    4804 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.606650    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:25:53.606650    4804 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9846033s
	I1210 07:25:53.606650    4804 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:25:53.611255    4804 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.611255    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:25:53.611255    4804 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9892082s
	I1210 07:25:53.611255    4804 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:25:53.618257    4804 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.618257    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:25:53.618257    4804 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 2.9962103s
	I1210 07:25:53.618257    4804 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:25:53.622270    4804 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.622270    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:25:53.623277    4804 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.0012304s
	I1210 07:25:53.623277    4804 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:25:53.639040    4804 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.639270    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:25:53.639270    4804 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.0172233s
	I1210 07:25:53.639270    4804 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:25:53.654496    4804 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.654560    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:25:53.654560    4804 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.0325133s
	I1210 07:25:53.654560    4804 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:25:53.657375    4804 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.657375    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:25:53.657375    4804 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.0353279s
	I1210 07:25:53.657375    4804 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:25:53.721903    4804 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.722919    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:25:53.722919    4804 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.1008707s
	I1210 07:25:53.722919    4804 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:25:53.722919    4804 cache.go:87] Successfully saved all images to host disk.
	I1210 07:25:54.341687    4804 cli_runner.go:217] Completed: docker run --rm --name bridge-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --entrypoint /usr/bin/test -v bridge-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6803432s)
	I1210 07:25:54.341687    4804 oci.go:107] Successfully prepared a docker volume bridge-648600
	I1210 07:25:54.341687    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:25:54.345933    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	W1210 07:25:51.668965   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	W1210 07:25:54.108193   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	I1210 07:25:55.617983   10052 pod_ready.go:94] pod "coredns-66bc5c9577-snb42" is "Ready"
	I1210 07:25:55.617983   10052 pod_ready.go:86] duration metric: took 32.0203634s for pod "coredns-66bc5c9577-snb42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.617983   10052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.622647   10052 pod_ready.go:99] pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-z85xd" not found
	I1210 07:25:55.622692   10052 pod_ready.go:86] duration metric: took 4.7083ms for pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.628956   10052 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.641372   10052 pod_ready.go:94] pod "etcd-enable-default-cni-648600" is "Ready"
	I1210 07:25:55.641424   10052 pod_ready.go:86] duration metric: took 12.4205ms for pod "etcd-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.649373   10052 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.660931   10052 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-648600" is "Ready"
	I1210 07:25:55.660931   10052 pod_ready.go:86] duration metric: took 11.513ms for pod "kube-apiserver-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.665948   10052 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.004282   10052 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-648600" is "Ready"
	I1210 07:25:56.004282   10052 pod_ready.go:86] duration metric: took 338.3283ms for pod "kube-controller-manager-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.204210   10052 pod_ready.go:83] waiting for pod "kube-proxy-vbl22" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.604378   10052 pod_ready.go:94] pod "kube-proxy-vbl22" is "Ready"
	I1210 07:25:56.604904   10052 pod_ready.go:86] duration metric: took 400.6871ms for pod "kube-proxy-vbl22" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.803854   10052 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:57.202693   10052 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-648600" is "Ready"
	I1210 07:25:57.203218   10052 pod_ready.go:86] duration metric: took 399.2547ms for pod "kube-scheduler-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:57.203218   10052 pod_ready.go:40] duration metric: took 33.6115798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:25:57.296715   10052 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:25:57.302628   10052 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-648600" cluster and "default" namespace by default
	I1210 07:25:54.612936    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:54.590365523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:54.615934    4804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:25:54.861212    4804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-648600 --name bridge-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-648600 --network bridge-648600 --ip 192.168.112.2 --volume bridge-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
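This is the node itself: privileged, seccomp and apparmor unconfined, pinned to the static IP calculated earlier, capped at 2 CPUs / 3072MB, with ports 22, 2376, 5000, 8443 and 32443 each published to an ephemeral port on 127.0.0.1. The host-side SSH port is then recovered with the inspect template that recurs throughout the rest of the log:

    # Which 127.0.0.1 port did Docker map to the node's sshd?
    docker container inspect bridge-648600 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'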
	I1210 07:25:55.596152    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Running}}
	I1210 07:25:55.671931    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:55.726932    4804 cli_runner.go:164] Run: docker exec bridge-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:25:55.835842    4804 oci.go:144] the created container "bridge-648600" has a running status.
	I1210 07:25:55.835842    4804 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa...
	I1210 07:25:55.990727    4804 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:25:56.069551    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:56.135549    4804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:25:56.135549    4804 kic_runner.go:114] Args: [docker exec --privileged bridge-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:25:56.296490    4804 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa...
	I1210 07:25:58.538165    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:58.610699    4804 machine.go:94] provisionDockerMachine start ...
	I1210 07:25:58.614716    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:58.671691    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:58.684691    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:58.684691    4804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:25:58.854993    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-648600
	
	I1210 07:25:58.855026    4804 ubuntu.go:182] provisioning hostname "bridge-648600"
	I1210 07:25:58.858622    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:58.909867    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:58.910872    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:58.910872    4804 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-648600 && echo "bridge-648600" | sudo tee /etc/hostname
	I1210 07:25:59.133481    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-648600
	
	I1210 07:25:59.139277    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.193639    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:59.194659    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:59.194659    4804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:25:59.366643    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
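The snippet above is minikube's idempotent /etc/hosts fixup: rewrite or append a 127.0.1.1 entry only when no line already ends in the hostname. Docker typically writes the container's hostname into /etc/hosts itself, so the guard matches and nothing changes, hence the empty command output. A manual check inside the node would be:

    # Expect the node hostname already present in /etc/hosts
    grep -w bridge-648600 /etc/hosts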
	I1210 07:25:59.366643    4804 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:25:59.366643    4804 ubuntu.go:190] setting up certificates
	I1210 07:25:59.366643    4804 provision.go:84] configureAuth start
	I1210 07:25:59.372569    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:25:59.424305    4804 provision.go:143] copyHostCerts
	I1210 07:25:59.424305    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:25:59.424305    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:25:59.425310    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:25:59.426315    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:25:59.426315    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:25:59.426315    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:25:59.426315    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:25:59.426315    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:25:59.427309    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:25:59.428305    4804 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-648600 san=[127.0.0.1 192.168.112.2 bridge-648600 localhost minikube]
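configureAuth issues a per-machine server certificate signed by the local minikube CA, with SANs for every name the TLS Docker endpoint may be reached by: 127.0.0.1, the container IP, the profile name, localhost and minikube. Assuming OpenSSL 1.1.1+ on the host, the resulting SANs can be inspected with:

    # Print the subjectAltName extension of the generated server cert
    openssl x509 -in server.pem -noout -ext subjectAltName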
	I1210 07:25:59.609649    4804 provision.go:177] copyRemoteCerts
	I1210 07:25:59.612966    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:25:59.616449    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.669935    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:25:59.791998    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:25:59.820476    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:25:59.846719    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:25:59.877046    4804 provision.go:87] duration metric: took 510.3942ms to configureAuth
	I1210 07:25:59.877077    4804 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:25:59.877619    4804 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:25:59.880641    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.942142    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:59.942142    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:59.942142    4804 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:26:00.118053    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:26:00.118118    4804 ubuntu.go:71] root file system type: overlay
	I1210 07:26:00.118212    4804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:26:00.123385    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:00.181410    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:00.181982    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:26:00.181982    4804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:26:00.393612    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:26:00.397347    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:00.457089    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:00.457167    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:26:00.457167    4804 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:26:01.934057    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:26:00.376152452 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1210 07:26:01.934057    4804 machine.go:97] duration metric: took 3.3233058s to provisionDockerMachine
	I1210 07:26:01.934057    4804 client.go:176] duration metric: took 11.302605s to LocalClient.Create
	I1210 07:26:01.934057    4804 start.go:167] duration metric: took 11.3027041s to libmachine.API.Create "bridge-648600"
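The unit install above is a write-compare-swap: the rendered docker.service is written to docker.service.new, diffed against the live unit, and only on a difference moved into place, followed by daemon-reload/enable/restart. The diff output shows exactly what changed: ExecStart gains the TLS flags, the 10.96.0.0/12 insecure registry and the nofile ulimit. Condensed, the pattern is:

    # Swap in the new unit only when it differs from the installed one
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service \
           && sudo systemctl daemon-reload && sudo systemctl restart docker; }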
	I1210 07:26:01.934594    4804 start.go:293] postStartSetup for "bridge-648600" (driver="docker")
	I1210 07:26:01.934692    4804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:26:01.942235    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:26:01.945040    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.000544    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.145056    4804 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:26:02.153062    4804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:26:02.153062    4804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:26:02.153062    4804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:26:02.154054    4804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:26:02.154054    4804 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:26:02.160050    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:26:02.177064    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:26:02.217059    4804 start.go:296] duration metric: took 282.4039ms for postStartSetup
	I1210 07:26:02.224068    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:26:02.300072    4804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json ...
	I1210 07:26:02.307063    4804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:26:02.311068    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.374070    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.523061    4804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:26:02.533071    4804 start.go:128] duration metric: took 11.9087995s to createHost
	I1210 07:26:02.533071    4804 start.go:83] releasing machines lock for "bridge-648600", held for 11.9089951s
	I1210 07:26:02.538055    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:26:02.606067    4804 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:26:02.611065    4804 ssh_runner.go:195] Run: cat /version.json
	I1210 07:26:02.611065    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.615063    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.672078    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.673066    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	W1210 07:26:02.794076    4804 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
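Status 127 is "command not found": the registry reachability probe is issued as curl.exe, the Windows binary name, but it runs inside the Linux node, which only has plain curl. The proxy warning a few lines below is therefore triggered by this phantom failure, not by an actual network problem. Assuming curl is present in the node image, the intended probe is simply:

    # Registry reachability check with the Linux binary name
    curl -sS -m 2 https://registry.k8s.io/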
	I1210 07:26:02.799071    4804 ssh_runner.go:195] Run: systemctl --version
	I1210 07:26:02.818066    4804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:26:02.829076    4804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:26:02.835123    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:26:02.893077    4804 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
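Since this profile uses the bridge CNI, minikube sidelines competing preinstalled bridge/podman CNI configs by renaming them with a .mk_disabled suffix, as the find/-exec mv above shows. Inside the node, the sidelined files can be listed with:

    # CNI configs minikube renamed out of the way
    ls /etc/cni/net.d/*.mk_disabled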
	I1210 07:26:02.893077    4804 start.go:496] detecting cgroup driver to use...
	I1210 07:26:02.893077    4804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:26:02.894067    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:26:02.911072    4804 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:26:02.911072    4804 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:26:02.931084    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:26:02.958066    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:26:02.978070    4804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:26:02.983075    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:26:03.006095    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:26:03.029063    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:26:03.051070    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:26:03.073080    4804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:26:03.101275    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:26:03.129493    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:26:03.156512    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:26:03.180504    4804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:26:03.204499    4804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:26:03.229498    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:03.450069    4804 ssh_runner.go:195] Run: sudo systemctl restart containerd
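The sed pipeline above rewrites /etc/containerd/config.toml in place to match the detected host settings: sandbox_image pinned to pause:3.10.1, SystemdCgroup forced to false to match the cgroupfs driver, legacy runtime names mapped to io.containerd.runc.v2, and unprivileged ports enabled, after which containerd is restarted. A spot check inside the node:

    # Confirm the rewrites applied to containerd's config
    sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' \
      /etc/containerd/config.toml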
	I1210 07:26:03.635076    4804 start.go:496] detecting cgroup driver to use...
	I1210 07:26:03.635076    4804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:26:03.641073    4804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:26:03.669080    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:26:03.696079    4804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:26:03.759984    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:26:03.783974    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:26:03.803590    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:26:03.837786    4804 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:26:03.848792    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:26:03.865545    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:26:03.905470    4804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:26:04.086477    4804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:26:04.248470    4804 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:26:04.248470    4804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:26:04.276479    4804 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:26:04.305474    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:04.461490    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:26:07.100531   11224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:26:07.100645   11224 kubeadm.go:319] 
	I1210 07:26:07.100914   11224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:26:07.107830   11224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:26:07.107830   11224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:07.109416   11224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:26:07.109416   11224 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] OS: Linux
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:07.113996   11224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:07.115992   11224 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:07.115992   11224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:06.143507    4804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6819903s)
	I1210 07:26:06.148172    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:26:06.173866    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:26:06.199939    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:26:06.223738    4804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:26:06.369886    4804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:26:06.510578    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:06.651893    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:26:06.680901    4804 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:26:06.708083    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:06.853347    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:26:06.965850    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:26:06.985257    4804 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:26:06.989257    4804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
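"Will wait 60s for socket path" is a simple poll: stat the socket until it appears or the budget runs out. A sketch of that loop (the helper name is illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a unix socket path, mirroring the 60s wait
    // on /var/run/cri-dockerd.sock logged above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s did not appear within %v", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }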
	I1210 07:26:06.996250    4804 start.go:564] Will wait 60s for crictl version
	I1210 07:26:07.000258    4804 ssh_runner.go:195] Run: which crictl
	I1210 07:26:07.012023    4804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:26:07.058889    4804 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:26:07.063603    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:26:07.115992    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:26:07.121990   11224 out.go:252]   - Booting up control plane ...
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:07.123991   11224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000660194s
	I1210 07:26:07.123991   11224 kubeadm.go:319] 
	I1210 07:26:07.123991   11224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:26:07.123991   11224 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
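The [kubelet-check] failure above is kubeadm polling GET http://127.0.0.1:10248/healthz until a 4m0s deadline and then giving up. The shape of that wait loop, as a Go sketch (poll interval and function name are illustrative, not kubeadm's source):

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint until it answers
    // 200 OK or the deadline passes -- the check that failed above.
    func waitKubeletHealthy(timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
                "http://127.0.0.1:10248/healthz", nil)
            if resp, err := http.DefaultClient.Do(req); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kubelet not healthy after %v: %w", timeout, ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        if err := waitKubeletHealthy(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }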
	I1210 07:26:07.124990   11224 kubeadm.go:403] duration metric: took 8m14.0562387s to StartCluster
	I1210 07:26:07.124990   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:07.128999   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:07.189549   11224 cri.go:89] found id: ""
	I1210 07:26:07.189549   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.190547   11224 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:26:07.190547   11224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:26:07.193548   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:07.244335   11224 cri.go:89] found id: ""
	I1210 07:26:07.244335   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.244335   11224 logs.go:284] No container was found matching "etcd"
	I1210 07:26:07.244335   11224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:26:07.248555   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:07.295451   11224 cri.go:89] found id: ""
	I1210 07:26:07.295451   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.295451   11224 logs.go:284] No container was found matching "coredns"
	I1210 07:26:07.295451   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:07.299449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:07.346456   11224 cri.go:89] found id: ""
	I1210 07:26:07.346456   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.346456   11224 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:26:07.346456   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:07.352449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:07.400714   11224 cri.go:89] found id: ""
	I1210 07:26:07.400714   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.400714   11224 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:07.400714   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:07.406617   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:07.469611   11224 cri.go:89] found id: ""
	I1210 07:26:07.469611   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.469611   11224 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:26:07.469611   11224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:07.473612   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:07.521612   11224 cri.go:89] found id: ""
	I1210 07:26:07.521612   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.521612   11224 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:07.521612   11224 logs.go:123] Gathering logs for Docker ...
	I1210 07:26:07.521612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:26:07.551610   11224 logs.go:123] Gathering logs for container status ...
	I1210 07:26:07.552612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:07.608708   11224 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:07.608708   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:07.689194   11224 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:07.689194   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:07.734619   11224 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:07.734619   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:07.823677   11224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:26:07.823677   11224 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:26:07.823677   11224 out.go:285] * 
	W1210 07:26:07.823677   11224 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.823677   11224 out.go:285] * 
	W1210 07:26:07.825673   11224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:26:07.830674   11224 out.go:203] 
	W1210 07:26:07.833685   11224 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.833685   11224 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:26:07.833685   11224 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:26:07.837675   11224 out.go:203] 
	I1210 07:26:07.158545    4804 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:26:07.162556    4804 cli_runner.go:164] Run: docker exec -t bridge-648600 dig +short host.docker.internal
	I1210 07:26:07.286451    4804 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:26:07.290450    4804 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:26:07.297449    4804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
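Having dug the host IP (192.168.65.254) out of the container's DNS, minikube rewrites /etc/hosts in three steps: strip any stale host.minikube.internal line, append the fresh mapping, and sudo cp a staged copy into place (a plain shell redirect would fail because the redirect is opened by the unprivileged shell, not by sudo). The same edit in Go, run as root (a sketch; minikube uses the bash one-liner above):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.65.254\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any previous host.minikube.internal mapping, then append ours.
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                keep = append(keep, line)
            }
        }
        keep = append(keep, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }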
	I1210 07:26:07.316447    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:07.369456    4804 kubeadm.go:884] updating cluster {Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:26:07.369456    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:26:07.373469    4804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:26:07.407613    4804 docker.go:691] Got preloaded images: 
	I1210 07:26:07.407613    4804 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:26:07.407613    4804 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:26:07.417613    4804 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:07.421639    4804 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:07.425625    4804 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:07.426619    4804 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:07.430622    4804 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:07.430622    4804 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:07.433626    4804 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:07.436618    4804 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:07.440643    4804 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:07.440643    4804 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:07.443631    4804 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:07.444632    4804 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:26:07.448646    4804 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:07.451625    4804 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:07.453633    4804 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:26:07.459644    4804 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:26:07.487615    4804 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.536614    4804 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.583612    4804 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.642677    4804 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.694179    4804 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.748680    4804 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.796684    4804 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.845675    4804 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
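Each authn warning above is the Windows credential helper failing ("A specified logon session does not exist"), after which the lookup falls back to anonymous pulls, which is all a public registry like registry.k8s.io needs. With go-containerregistry, the library minikube's image.go is built on, the anonymous path looks roughly like this sketch:

    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/authn"
        "github.com/google/go-containerregistry/pkg/name"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func main() {
        ref, err := name.ParseReference("registry.k8s.io/coredns/coredns:v1.12.1")
        if err != nil {
            panic(err)
        }
        // Anonymous auth: the fallback taken when the credential helper errors.
        img, err := remote.Image(ref, remote.WithAuth(authn.Anonymous))
        if err != nil {
            panic(err)
        }
        digest, _ := img.Digest()
        fmt.Println("pulled metadata for", ref, "digest", digest)
    }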
	I1210 07:26:07.958276    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:07.959534    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:07.973493    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:08.007424    4804 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:26:08.007951    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:26:08.008033    4804 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:08.010755    4804 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:26:08.010812    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:26:08.010846    4804 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:08.013606    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:08.015295    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:08.017825    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:08.021326    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:26:08.026350    4804 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:26:08.026350    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:26:08.026350    4804 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:08.029343    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:08.030346    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:08.069337    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:08.177347    4804 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:26:08.178350    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:26:08.177347    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:26:08.178350    4804 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:08.178350    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:26:08.181336    4804 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:26:08.181336    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:26:08.181336    4804 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:08.181336    4804 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:26:08.181336    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:26:08.181336    4804 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:26:08.183336    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:26:08.185357    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:08.185357    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:26:08.186351    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:26:08.189333    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:26:08.189333    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:08.191351    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:26:08.192350    4804 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:26:08.192350    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:26:08.192350    4804 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:08.197332    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:08.202343    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:08.290330    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:26:08.290330    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:26:08.290330    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:26:08.290330    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:26:08.290330    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:26:08.290330    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:26:08.290330    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:26:08.290330    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:26:08.290330    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:26:08.298360    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:26:08.298360    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:26:08.298360    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:26:08.377348    4804 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:26:08.377348    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:26:08.377348    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:26:08.377348    4804 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:08.383341    4804 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:08.386352    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:26:08.469344    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:26:08.469344    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:26:08.469344    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:26:08.470347    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:26:08.472335    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:26:08.472335    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:26:08.534347    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:26:08.534347    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:26:08.560343    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:26:08.566342    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:26:08.690347    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:26:08.690347    4804 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:26:08.690347    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:26:08.690347    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:26:08.953354    4804 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	
	
	==> Docker <==
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653477207Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653491208Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653496809Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653502209Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653531612Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653569015Z" level=info msg="Initializing buildkit"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.846125896Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.854786460Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855010880Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855019980Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855177894Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 07:18:02 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:18:02Z" level=info msg="Stop pulling image registry.k8s.io/etcd:3.6.6-0: Status: Downloaded newer image for registry.k8s.io/etcd:3.6.6-0"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:12.795847   11364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:12.796885   11364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:12.798235   11364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:12.799629   11364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:12.801134   11364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:26] CPU: 0 PID: 442139 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7fa4c8168b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fa4c8168af6.
	[  +0.000002] RSP: 002b:00007ffcec5b9c60 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.959943] CPU: 3 PID: 442297 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8a7efcdb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f8a7efcdaf6.
	[  +0.000001] RSP: 002b:00007fffca681070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:26:12 up  2:54,  0 user,  load average: 6.03, 5.57, 4.80
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:26:09 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:09 no-preload-099700 kubelet[11128]: E1210 07:26:09.954246   11128 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:09 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:10 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 07:26:10 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:10 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:10 no-preload-099700 kubelet[11205]: E1210 07:26:10.691238   11205 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:10 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:10 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:11 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 07:26:11 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:11 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:11 no-preload-099700 kubelet[11232]: E1210 07:26:11.451058   11232 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:11 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:11 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:12 no-preload-099700 kubelet[11260]: E1210 07:26:12.207486   11260 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	
-- /stdout --
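The kubelet journal just above is the proximate cause of this failure group: kubelet v1.35.0-rc.1 rejects its configuration on a cgroup v1 host ("cgroup v1 support is unsupported") and systemd cycles it through restarts 322 to 325 in roughly four seconds, so the API server on :8443 never comes up. A minimal way to confirm a node's cgroup mode, sketched against the container named in this report (the .wslconfig stanza is an assumption about how this WSL2 worker could be switched to cgroup v2, not a verified fix):

	# Print the filesystem type mounted at /sys/fs/cgroup inside the node:
	# "cgroup2fs" means the unified cgroup v2 hierarchy (accepted by this kubelet),
	# "tmpfs" means the legacy cgroup v1 layout that is rejected above.
	docker exec no-preload-099700 stat -fc %T /sys/fs/cgroup/
	
	# Assumption: on a WSL2 host, cgroup v2 can be requested via %UserProfile%\.wslconfig:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all
	# then restart the VM with `wsl --shutdown` so the kernel command line is reapplied.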
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 6 (626.8376ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:26:13.768286    4860 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
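This status probe exits 6 because the profile entry is missing from the harness kubeconfig, not because of a Docker-level fault: minikube can see the container but has no API endpoint recorded for it. A small sketch for inspecting and repairing that mapping, reusing the binary and paths from the lines above (fixing the context would not resolve the kubelet crash loop itself):

	# List the contexts actually present in the kubeconfig the tests point at
	kubectl config get-contexts --kubeconfig "C:\Users\jenkins.minikube4\minikube-integration\kubeconfig"
	
	# Regenerate the kubeconfig entry for this profile from minikube's saved state,
	# as the warning in the stdout above recommends
	out/minikube-windows-amd64.exe update-context -p no-preload-099700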
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:17:16.221120749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19d075be822285a6bc04718614fae0d1e6b527c2b7b973ed840dd03da78703c1",
	            "SandboxKey": "/var/run/docker/netns/19d075be8222",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56155"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56156"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "bd211c76c769a23696ddb9b2e4a3cd1f6c2388bff504ec060a8ffe809e64dcb5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
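The inspect dump confirms the node container itself is healthy: State.Running is true with RestartCount 0, and the five ports requested with HostPort "0" were dynamically bound to 56153 through 56157 on 127.0.0.1. When only the network settings matter in a post-mortem, a Go-template filter trims the output to the relevant fields; a minimal sketch against the same container:

	# Just the published port map (container port -> HostIp/HostPort pairs)
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-099700
	
	# Just the node's address on the profile network (192.168.103.2 above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-099700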
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 6 (621.4086ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:26:14.467205   10376 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (1.0870577s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                     │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-648600 sudo systemctl cat kubelet --no-pager                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /var/lib/kubelet/config.yaml                                                      │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status docker --all --full --no-pager                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat docker --no-pager                                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/docker/daemon.json                                                           │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo docker system info                                                                    │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status cri-docker --all --full --no-pager                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat cri-docker --no-pager                                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                              │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service                                        │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cri-dockerd --version                                                                 │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status containerd --all --full --no-pager                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl cat containerd --no-pager                                                   │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /lib/systemd/system/containerd.service                                            │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo cat /etc/containerd/config.toml                                                       │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo containerd config dump                                                                │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo systemctl status crio --all --full --no-pager                                         │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │                     │
	│ ssh     │ -p flannel-648600 sudo systemctl cat crio --no-pager                                                         │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                               │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ ssh     │ -p flannel-648600 sudo crio config                                                                           │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ delete  │ -p flannel-648600                                                                                            │ flannel-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	│ start   │ -p bridge-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker │ bridge-648600             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │                     │
	│ ssh     │ -p enable-default-cni-648600 pgrep -a kubelet                                                                │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:25 UTC │ 10 Dec 25 07:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:25:49
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:25:49.543159    4804 out.go:360] Setting OutFile to fd 1260 ...
	I1210 07:25:49.586332    4804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:25:49.586332    4804 out.go:374] Setting ErrFile to fd 812...
	I1210 07:25:49.586377    4804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:25:49.601444    4804 out.go:368] Setting JSON to false
	I1210 07:25:49.603301    4804 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10481,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:25:49.603301    4804 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:25:49.607247    4804 out.go:179] * [bridge-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:25:49.610906    4804 notify.go:221] Checking for updates...
	I1210 07:25:49.613490    4804 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:25:49.615618    4804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:25:49.617459    4804 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:25:49.620105    4804 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:25:49.622698    4804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:25:47.110970   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	W1210 07:25:49.622010   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	I1210 07:25:49.625061    4804 config.go:182] Loaded profile config "enable-default-cni-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:25:49.625872    4804 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:25:49.626037    4804 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:25:49.626037    4804 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:25:49.756585    4804 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:25:49.760223    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:25:49.995247    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:49.978486557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:49.998261    4804 out.go:179] * Using the docker driver based on user configuration
	I1210 07:25:50.001264    4804 start.go:309] selected driver: docker
	I1210 07:25:50.002267    4804 start.go:927] validating driver "docker" against <nil>
	I1210 07:25:50.002267    4804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:25:50.087841    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:25:50.326740    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:50.304007932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:50.326740    4804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:25:50.328404    4804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:25:50.338396    4804 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:25:50.340335    4804 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:25:50.340335    4804 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 07:25:50.340335    4804 start.go:353] cluster config:
	{Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:25:50.343283    4804 out.go:179] * Starting "bridge-648600" primary control-plane node in "bridge-648600" cluster
	I1210 07:25:50.346532    4804 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:25:50.348744    4804 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:25:50.351187    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:25:50.351187    4804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:25:50.394442    4804 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:25:50.434159    4804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:25:50.434159    4804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:25:50.622000    4804 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:25:50.622000    4804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json ...
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:25:50.622000    4804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json: {Name:mkda6ce656f671ed6502f97ceabe139018dc3485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:25:50.622000    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:25:50.623233    4804 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:25:50.623233    4804 start.go:360] acquireMachinesLock for bridge-648600: {Name:mk22986727a0b030c8919e2ba8ce1cc03f255d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:50.623828    4804 start.go:364] duration metric: took 594.9µs to acquireMachinesLock for "bridge-648600"
	I1210 07:25:50.624001    4804 start.go:93] Provisioning new machine with config: &{Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:25:50.624086    4804 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:25:50.630473    4804 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:25:50.631176    4804 start.go:159] libmachine.API.Create for "bridge-648600" (driver="docker")
	I1210 07:25:50.631275    4804 client.go:173] LocalClient.Create starting
	I1210 07:25:50.631359    4804 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:25:50.631951    4804 main.go:143] libmachine: Decoding PEM data...
	I1210 07:25:50.631985    4804 main.go:143] libmachine: Parsing certificate...
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Decoding PEM data...
	I1210 07:25:50.632047    4804 main.go:143] libmachine: Parsing certificate...
	I1210 07:25:50.637892    4804 cli_runner.go:164] Run: docker network inspect bridge-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:25:50.766370    4804 cli_runner.go:211] docker network inspect bridge-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:25:50.773371    4804 network_create.go:284] running [docker network inspect bridge-648600] to gather additional debugging logs...
	I1210 07:25:50.773371    4804 cli_runner.go:164] Run: docker network inspect bridge-648600
	W1210 07:25:50.940187    4804 cli_runner.go:211] docker network inspect bridge-648600 returned with exit code 1
	I1210 07:25:50.940187    4804 network_create.go:287] error running [docker network inspect bridge-648600]: docker network inspect bridge-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-648600 not found
	I1210 07:25:50.940187    4804 network_create.go:289] output of [docker network inspect bridge-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-648600 not found
	
	** /stderr **
	I1210 07:25:50.943184    4804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:51.023198    4804 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.067123    4804 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.118311    4804 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.415711    4804 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.593726    4804 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.749810    4804 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.785598    4804 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:25:51.819602    4804 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc7020}
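The scan above walks candidate private /24 ranges, starting at 192.168.49.0/24 and stepping the third octet by 9, until it finds one not already claimed by an existing docker network. A minimal shell sketch of the same selection, assuming only the docker CLI (the starting octet and step size simply mirror the log lines above, not minikube's actual implementation):

    # list subnets already used by docker networks, then take the first
    # 192.168.X.0/24 candidate (stepping by 9) that is still free
    used=$(docker network inspect $(docker network ls -q) \
        --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null)
    for o in $(seq 49 9 247); do
        case "$used" in
            *"192.168.$o.0/24"*) continue ;;
            *) echo "free subnet: 192.168.$o.0/24"; break ;;
        esac
    done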
	I1210 07:25:51.819602    4804 network_create.go:124] attempt to create docker network bridge-648600 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 1500 ...
	I1210 07:25:51.824606    4804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-648600 bridge-648600
	I1210 07:25:52.457680    4804 network_create.go:108] docker network bridge-648600 192.168.112.0/24 created
	I1210 07:25:52.457680    4804 kic.go:121] calculated static IP "192.168.112.2" for the "bridge-648600" container
	I1210 07:25:52.468991    4804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:25:52.567040    4804 cli_runner.go:164] Run: docker volume create bridge-648600 --label name.minikube.sigs.k8s.io=bridge-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:25:52.654855    4804 oci.go:103] Successfully created a docker volume bridge-648600
	I1210 07:25:52.661290    4804 cli_runner.go:164] Run: docker run --rm --name bridge-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --entrypoint /usr/bin/test -v bridge-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:25:53.606607    4804 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.606650    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:25:53.606650    4804 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9846033s
	I1210 07:25:53.606650    4804 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:25:53.611255    4804 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.611255    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:25:53.611255    4804 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9892082s
	I1210 07:25:53.611255    4804 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:25:53.618257    4804 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.618257    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:25:53.618257    4804 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 2.9962103s
	I1210 07:25:53.618257    4804 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:25:53.622270    4804 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.622270    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:25:53.623277    4804 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.0012304s
	I1210 07:25:53.623277    4804 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:25:53.639040    4804 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.639270    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:25:53.639270    4804 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.0172233s
	I1210 07:25:53.639270    4804 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:25:53.654496    4804 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.654560    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:25:53.654560    4804 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.0325133s
	I1210 07:25:53.654560    4804 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:25:53.657375    4804 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.657375    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:25:53.657375    4804 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.0353279s
	I1210 07:25:53.657375    4804 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:25:53.721903    4804 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:53.722919    4804 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:25:53.722919    4804 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.1008707s
	I1210 07:25:53.722919    4804 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:25:53.722919    4804 cache.go:87] Successfully saved all images to host disk.
	I1210 07:25:54.341687    4804 cli_runner.go:217] Completed: docker run --rm --name bridge-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --entrypoint /usr/bin/test -v bridge-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6803432s)
	I1210 07:25:54.341687    4804 oci.go:107] Successfully prepared a docker volume bridge-648600
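The "preload sidecar" that just completed is a throwaway container used only for its side effect: mounting the empty named volume bridge-648600 at /var makes docker copy the kicbase image's /var contents into the volume on first mount, and the /usr/bin/test -d /var/lib entrypoint merely confirms the copy happened. The same trick in isolation (volume and image names here are illustrative placeholders, not from the log):

    # populate a fresh named volume from an image's /var, then verify it
    docker volume create demo-var
    docker run --rm --entrypoint /usr/bin/test -v demo-var:/var \
        debian:bookworm -d /var/lib && echo "volume populated"
    docker volume rm demo-var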
	I1210 07:25:54.341687    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:25:54.345933    4804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	W1210 07:25:51.668965   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	W1210 07:25:54.108193   10052 pod_ready.go:104] pod "coredns-66bc5c9577-snb42" is not "Ready", error: <nil>
	I1210 07:25:55.617983   10052 pod_ready.go:94] pod "coredns-66bc5c9577-snb42" is "Ready"
	I1210 07:25:55.617983   10052 pod_ready.go:86] duration metric: took 32.0203634s for pod "coredns-66bc5c9577-snb42" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.617983   10052 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.622647   10052 pod_ready.go:99] pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-z85xd" not found
	I1210 07:25:55.622692   10052 pod_ready.go:86] duration metric: took 4.7083ms for pod "coredns-66bc5c9577-z85xd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.628956   10052 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.641372   10052 pod_ready.go:94] pod "etcd-enable-default-cni-648600" is "Ready"
	I1210 07:25:55.641424   10052 pod_ready.go:86] duration metric: took 12.4205ms for pod "etcd-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.649373   10052 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.660931   10052 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-648600" is "Ready"
	I1210 07:25:55.660931   10052 pod_ready.go:86] duration metric: took 11.513ms for pod "kube-apiserver-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:55.665948   10052 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.004282   10052 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-648600" is "Ready"
	I1210 07:25:56.004282   10052 pod_ready.go:86] duration metric: took 338.3283ms for pod "kube-controller-manager-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.204210   10052 pod_ready.go:83] waiting for pod "kube-proxy-vbl22" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.604378   10052 pod_ready.go:94] pod "kube-proxy-vbl22" is "Ready"
	I1210 07:25:56.604904   10052 pod_ready.go:86] duration metric: took 400.6871ms for pod "kube-proxy-vbl22" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:56.803854   10052 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:57.202693   10052 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-648600" is "Ready"
	I1210 07:25:57.203218   10052 pod_ready.go:86] duration metric: took 399.2547ms for pod "kube-scheduler-enable-default-cni-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:25:57.203218   10052 pod_ready.go:40] duration metric: took 33.6115798s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:25:57.296715   10052 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:25:57.302628   10052 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-648600" cluster and "default" namespace by default
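The pod_ready block above comes from the parallel enable-default-cni-648600 run (PID 10052) interleaved into this log; it polls each kube-system pod until its Ready condition is true or the pod is gone. Roughly the same wait can be reproduced by hand with kubectl, e.g. (timeouts chosen arbitrarily):

    # wait for CoreDNS and kube-proxy pods to report Ready
    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=120s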
	I1210 07:25:54.612936    4804 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:25:54.590365523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:25:54.615934    4804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:25:54.861212    4804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-648600 --name bridge-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-648600 --network bridge-648600 --ip 192.168.112.2 --volume bridge-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:25:55.596152    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Running}}
	I1210 07:25:55.671931    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:55.726932    4804 cli_runner.go:164] Run: docker exec bridge-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:25:55.835842    4804 oci.go:144] the created container "bridge-648600" has a running status.
	I1210 07:25:55.835842    4804 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa...
	I1210 07:25:55.990727    4804 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:25:56.069551    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:56.135549    4804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:25:56.135549    4804 kic_runner.go:114] Args: [docker exec --privileged bridge-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:25:56.296490    4804 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa...
	I1210 07:25:58.538165    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:25:58.610699    4804 machine.go:94] provisionDockerMachine start ...
	I1210 07:25:58.614716    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:58.671691    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:58.684691    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:58.684691    4804 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:25:58.854993    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-648600
	
	I1210 07:25:58.855026    4804 ubuntu.go:182] provisioning hostname "bridge-648600"
	I1210 07:25:58.858622    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:58.909867    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:58.910872    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:58.910872    4804 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-648600 && echo "bridge-648600" | sudo tee /etc/hostname
	I1210 07:25:59.133481    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-648600
	
	I1210 07:25:59.139277    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.193639    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:59.194659    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:59.194659    4804 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:25:59.366643    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:25:59.366643    4804 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:25:59.366643    4804 ubuntu.go:190] setting up certificates
	I1210 07:25:59.366643    4804 provision.go:84] configureAuth start
	I1210 07:25:59.372569    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:25:59.424305    4804 provision.go:143] copyHostCerts
	I1210 07:25:59.424305    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:25:59.424305    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:25:59.425310    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:25:59.426315    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:25:59.426315    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:25:59.426315    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:25:59.426315    4804 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:25:59.426315    4804 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:25:59.427309    4804 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:25:59.428305    4804 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-648600 san=[127.0.0.1 192.168.112.2 bridge-648600 localhost minikube]
	I1210 07:25:59.609649    4804 provision.go:177] copyRemoteCerts
	I1210 07:25:59.612966    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:25:59.616449    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.669935    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
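Each SSH step resolves the container's sshd the same way: port 22 inside the container was published to an ephemeral 127.0.0.1 port at creation (--publish=127.0.0.1::22, here 57145), and the inspect template pulls that HostPort back out. Reproduced by hand (key path shown Unix-style for brevity; the log uses the Windows profile path):

    # recover the host port mapped to the container's sshd and connect
    port=$(docker container inspect \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' bridge-648600)
    ssh -p "$port" -i ~/.minikube/machines/bridge-648600/id_rsa docker@127.0.0.1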
	I1210 07:25:59.791998    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:25:59.820476    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:25:59.846719    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:25:59.877046    4804 provision.go:87] duration metric: took 510.3942ms to configureAuth
	I1210 07:25:59.877077    4804 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:25:59.877619    4804 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:25:59.880641    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:25:59.942142    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:59.942142    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:25:59.942142    4804 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:26:00.118053    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:26:00.118118    4804 ubuntu.go:71] root file system type: overlay
	I1210 07:26:00.118212    4804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:26:00.123385    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:00.181410    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:00.181982    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:26:00.181982    4804 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:26:00.393612    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:26:00.397347    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:00.457089    4804 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:00.457167    4804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57145 <nil> <nil>}
	I1210 07:26:00.457167    4804 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:26:01.934057    4804 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:26:00.376152452 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1210 07:26:01.934057    4804 machine.go:97] duration metric: took 3.3233058s to provisionDockerMachine
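The docker.service update above uses a write-then-diff guard: the desired unit is written to docker.service.new, and only when it differs from the live unit (as the diff output shows it did here) is it moved into place and the daemon reloaded, enabled, and restarted. The same guard, restated on its own (paths and commands exactly as in the SSH command above):

    # install the new unit only when it actually changes something
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
        || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
             sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }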
	I1210 07:26:01.934057    4804 client.go:176] duration metric: took 11.302605s to LocalClient.Create
	I1210 07:26:01.934057    4804 start.go:167] duration metric: took 11.3027041s to libmachine.API.Create "bridge-648600"
	I1210 07:26:01.934594    4804 start.go:293] postStartSetup for "bridge-648600" (driver="docker")
	I1210 07:26:01.934692    4804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:26:01.942235    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:26:01.945040    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.000544    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.145056    4804 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:26:02.153062    4804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:26:02.153062    4804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:26:02.153062    4804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:26:02.154054    4804 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:26:02.154054    4804 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:26:02.160050    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:26:02.177064    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:26:02.217059    4804 start.go:296] duration metric: took 282.4039ms for postStartSetup
	I1210 07:26:02.224068    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:26:02.300072    4804 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\bridge-648600\config.json ...
	I1210 07:26:02.307063    4804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:26:02.311068    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.374070    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.523061    4804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:26:02.533071    4804 start.go:128] duration metric: took 11.9087995s to createHost
	I1210 07:26:02.533071    4804 start.go:83] releasing machines lock for "bridge-648600", held for 11.9089951s
	I1210 07:26:02.538055    4804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-648600
	I1210 07:26:02.606067    4804 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:26:02.611065    4804 ssh_runner.go:195] Run: cat /version.json
	I1210 07:26:02.611065    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.615063    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:02.672078    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:02.673066    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	W1210 07:26:02.794076    4804 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:26:02.799071    4804 ssh_runner.go:195] Run: systemctl --version
	I1210 07:26:02.818066    4804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:26:02.829076    4804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:26:02.835123    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:26:02.893077    4804 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:26:02.893077    4804 start.go:496] detecting cgroup driver to use...
	I1210 07:26:02.893077    4804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:26:02.894067    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:26:02.911072    4804 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:26:02.911072    4804 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:26:02.931084    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:26:02.958066    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:26:02.978070    4804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:26:02.983075    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:26:03.006095    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:26:03.029063    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:26:03.051070    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:26:03.073080    4804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:26:03.101275    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:26:03.129493    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:26:03.156512    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:26:03.180504    4804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:26:03.204499    4804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:26:03.229498    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:03.450069    4804 ssh_runner.go:195] Run: sudo systemctl restart containerd
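The run of sed edits just applied aligns containerd with the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, the sandbox (pause) image is pinned, and unprivileged ports are enabled, after which containerd is restarted. The key toggle, isolated (same expression as the log line above):

    # force containerd to the cgroupfs cgroup driver, then restart it
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl restart containerd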
	I1210 07:26:03.635076    4804 start.go:496] detecting cgroup driver to use...
	I1210 07:26:03.635076    4804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:26:03.641073    4804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:26:03.669080    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:26:03.696079    4804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:26:03.759984    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:26:03.783974    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:26:03.803590    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:26:03.837786    4804 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:26:03.848792    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:26:03.865545    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:26:03.905470    4804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:26:04.086477    4804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:26:04.248470    4804 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:26:04.248470    4804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:26:04.276479    4804 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:26:04.305474    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:04.461490    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:26:07.100531   11224 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:26:07.100645   11224 kubeadm.go:319] 
	I1210 07:26:07.100914   11224 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:26:07.107830   11224 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:26:07.107830   11224 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:07.109416   11224 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:26:07.109416   11224 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:26:07.110013   11224 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:26:07.110998   11224 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:26:07.112017   11224 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] OS: Linux
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:26:07.112993   11224 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:07.113996   11224 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:07.113996   11224 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:07.115992   11224 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:07.115992   11224 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:26:07.116995   11224 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:26:07.117997   11224 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:07.117997   11224 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:07.117997   11224 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:06.143507    4804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6819903s)
	I1210 07:26:06.148172    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:26:06.173866    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:26:06.199939    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:26:06.223738    4804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:26:06.369886    4804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:26:06.510578    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:06.651893    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:26:06.680901    4804 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:26:06.708083    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:06.853347    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:26:06.965850    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:26:06.985257    4804 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:26:06.989257    4804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:26:06.996250    4804 start.go:564] Will wait 60s for crictl version
	I1210 07:26:07.000258    4804 ssh_runner.go:195] Run: which crictl
	I1210 07:26:07.012023    4804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:26:07.058889    4804 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:26:07.063603    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:26:07.115992    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:26:07.121990   11224 out.go:252]   - Booting up control plane ...
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:07.122992   11224 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:07.123991   11224 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:26:07.123991   11224 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000660194s
	I1210 07:26:07.123991   11224 kubeadm.go:319] 
	I1210 07:26:07.123991   11224 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:26:07.123991   11224 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:26:07.124990   11224 kubeadm.go:319] 
	I1210 07:26:07.124990   11224 kubeadm.go:403] duration metric: took 8m14.0562387s to StartCluster
	I1210 07:26:07.124990   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:26:07.128999   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:26:07.189549   11224 cri.go:89] found id: ""
	I1210 07:26:07.189549   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.190547   11224 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:26:07.190547   11224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:26:07.193548   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:26:07.244335   11224 cri.go:89] found id: ""
	I1210 07:26:07.244335   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.244335   11224 logs.go:284] No container was found matching "etcd"
	I1210 07:26:07.244335   11224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:26:07.248555   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:26:07.295451   11224 cri.go:89] found id: ""
	I1210 07:26:07.295451   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.295451   11224 logs.go:284] No container was found matching "coredns"
	I1210 07:26:07.295451   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:26:07.299449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:26:07.346456   11224 cri.go:89] found id: ""
	I1210 07:26:07.346456   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.346456   11224 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:26:07.346456   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:26:07.352449   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:26:07.400714   11224 cri.go:89] found id: ""
	I1210 07:26:07.400714   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.400714   11224 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:26:07.400714   11224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:26:07.406617   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:26:07.469611   11224 cri.go:89] found id: ""
	I1210 07:26:07.469611   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.469611   11224 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:26:07.469611   11224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:26:07.473612   11224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:26:07.521612   11224 cri.go:89] found id: ""
	I1210 07:26:07.521612   11224 logs.go:282] 0 containers: []
	W1210 07:26:07.521612   11224 logs.go:284] No container was found matching "kindnet"
	I1210 07:26:07.521612   11224 logs.go:123] Gathering logs for Docker ...
	I1210 07:26:07.521612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:26:07.551610   11224 logs.go:123] Gathering logs for container status ...
	I1210 07:26:07.552612   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:26:07.608708   11224 logs.go:123] Gathering logs for kubelet ...
	I1210 07:26:07.608708   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:26:07.689194   11224 logs.go:123] Gathering logs for dmesg ...
	I1210 07:26:07.689194   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:26:07.734619   11224 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:26:07.734619   11224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:26:07.823677   11224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:26:07.814275   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.815474   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.816551   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.817524   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:07.818265   11030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1210 07:26:07.823677   11224 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:26:07.823677   11224 out.go:285] * 
	W1210 07:26:07.823677   11224 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.823677   11224 out.go:285] * 
	W1210 07:26:07.825673   11224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:26:07.830674   11224 out.go:203] 
	W1210 07:26:07.833685   11224 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000660194s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:26:07.833685   11224 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:26:07.833685   11224 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:26:07.837675   11224 out.go:203] 
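	The kubelet failure above ends with a concrete hint (out.go:285). A minimal shell sketch of acting on it, assuming the commands target the same failing profile; only the quoted suggestion and the two troubleshooting commands come from the log itself, the rest is illustrative:

	    # Inspect the kubelet on the node first, per the kubeadm troubleshooting hints above
	    minikube ssh 'sudo systemctl status kubelet'
	    minikube ssh 'sudo journalctl -xeu kubelet'
	    # Retry with the cgroup-driver override named in the suggestion line
	    minikube start --extra-config=kubelet.cgroup-driver=systemd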
	I1210 07:26:07.158545    4804 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:26:07.162556    4804 cli_runner.go:164] Run: docker exec -t bridge-648600 dig +short host.docker.internal
	I1210 07:26:07.286451    4804 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:26:07.290450    4804 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:26:07.297449    4804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:26:07.316447    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:07.369456    4804 kubeadm.go:884] updating cluster {Name:bridge-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:bridge-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:26:07.369456    4804 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:26:07.373469    4804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:26:07.407613    4804 docker.go:691] Got preloaded images: 
	I1210 07:26:07.407613    4804 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:26:07.407613    4804 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:26:07.417613    4804 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:07.421639    4804 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:07.425625    4804 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:07.426619    4804 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:07.430622    4804 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:07.430622    4804 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:07.433626    4804 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:07.436618    4804 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:07.440643    4804 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:07.440643    4804 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:07.443631    4804 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:07.444632    4804 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:26:07.448646    4804 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:07.451625    4804 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:07.453633    4804 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:26:07.459644    4804 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:26:07.487615    4804 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.536614    4804 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.583612    4804 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.642677    4804 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.694179    4804 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.748680    4804 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.796684    4804 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:26:07.845675    4804 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:26:07.958276    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:07.959534    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:07.973493    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:08.007424    4804 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:26:08.007951    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:26:08.008033    4804 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:08.010755    4804 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:26:08.010812    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:26:08.010846    4804 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:08.013606    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:08.015295    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:26:08.017825    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:26:08.021326    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:26:08.026350    4804 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:26:08.026350    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:26:08.026350    4804 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:08.029343    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:08.030346    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:26:08.069337    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:08.177347    4804 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:26:08.178350    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:26:08.177347    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:26:08.178350    4804 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:08.178350    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:26:08.181336    4804 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:26:08.181336    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:26:08.181336    4804 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:08.181336    4804 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:26:08.181336    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:26:08.181336    4804 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:26:08.183336    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:26:08.185357    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:26:08.185357    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:26:08.186351    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:26:08.189333    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:26:08.189333    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:26:08.191351    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:26:08.192350    4804 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:26:08.192350    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:26:08.192350    4804 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:08.197332    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:26:08.202343    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:08.290330    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:26:08.290330    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:26:08.290330    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:26:08.290330    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:26:08.290330    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:26:08.290330    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:26:08.290330    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:26:08.290330    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:26:08.290330    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:26:08.298360    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:26:08.298360    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:26:08.298360    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:26:08.377348    4804 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:26:08.377348    4804 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:26:08.377348    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:26:08.377348    4804 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:08.383341    4804 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:08.386352    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:26:08.469344    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:26:08.469344    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:26:08.469344    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:26:08.470347    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:26:08.472335    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:26:08.472335    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:26:08.534347    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:26:08.534347    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:26:08.560343    4804 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:26:08.566342    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:26:08.690347    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:26:08.690347    4804 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:26:08.690347    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:26:08.690347    4804 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:26:08.953354    4804 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:26:09.591351    4804 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:26:09.591351    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1210 07:26:10.481085    4804 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:26:10.481085    4804 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:26:10.481085    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:26:13.388877    4804 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.9077467s)
	I1210 07:26:13.388977    4804 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:26:13.389017    4804 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:26:13.389017    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
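	The 4804 trace above is the per-image cache pipeline: stat the tarball on the node, copy it over from the Windows host cache when the stat fails, then pipe it into the runtime. A minimal sketch of the same sequence for one image, run on the node; the image name is illustrative and the commands mirror the ssh_runner.go/docker.go calls in the log:

	    IMG=/var/lib/minikube/images/kube-controller-manager_v1.34.3
	    # Existence check, as in ssh_runner.go:352 above; exit status 1 means "transfer first"
	    stat -c "%s %y" "$IMG" || echo "not on node yet: copy from the host-side cache"
	    # Load the transferred tarball into Docker, matching docker.go:305 above
	    sudo cat "$IMG" | docker load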
	
	
	==> Docker <==
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653477207Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653491208Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653496809Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653502209Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653531612Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653569015Z" level=info msg="Initializing buildkit"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.846125896Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.854786460Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855010880Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855019980Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855177894Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 07:18:02 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:18:02Z" level=info msg="Stop pulling image registry.k8s.io/etcd:3.6.6-0: Status: Downloaded newer image for registry.k8s.io/etcd:3.6.6-0"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:26:15.481049   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:15.481822   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:15.483854   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:15.484918   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:26:15.486029   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:26] CPU: 0 PID: 442139 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7fa4c8168b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fa4c8168af6.
	[  +0.000002] RSP: 002b:00007ffcec5b9c60 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.959943] CPU: 3 PID: 442297 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8a7efcdb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f8a7efcdaf6.
	[  +0.000001] RSP: 002b:00007fffca681070 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 07:26:15 up  2:54,  0 user,  load average: 6.27, 5.63, 4.83
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:12 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:12 no-preload-099700 kubelet[11382]: E1210 07:26:12.946791   11382 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:12 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:13 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 10 07:26:13 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:13 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:13 no-preload-099700 kubelet[11402]: E1210 07:26:13.707951   11402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:13 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:13 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:14 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 327.
	Dec 10 07:26:14 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:14 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:14 no-preload-099700 kubelet[11433]: E1210 07:26:14.442712   11433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:14 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:14 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:26:15 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 10 07:26:15 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:15 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:26:15 no-preload-099700 kubelet[11472]: E1210 07:26:15.189860   11472 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:26:15 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:26:15 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
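The kubelet journal above shows the root cause behind this group of failures: on every systemd restart (counters 325 through 328) kubelet v1.35.0-rc.1 exits with "kubelet is configured to not run on a host using cgroup v1", so the apiserver behind localhost:8443 never comes up. A quick way to confirm the host's cgroup version, sketched with stock minikube/stat commands (the profile name is taken from the log; this is a diagnostic aside, not part of the test run):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1.
	minikube -p no-preload-099700 ssh -- stat -fc %T /sys/fs/cgroup

On a WSL2 backend such as this one, a commonly cited workaround is booting the WSL kernel with cgroup_no_v1=all (a kernelCommandLine entry under [wsl2] in .wslconfig); whether that applies to this host is an assumption the log does not confirm.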
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 6 (595.5707ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:26:16.384491    4288 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (5.34s)
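Both status probes in this test tell the same story: the node container answers, but the profile's entry is missing from the kubeconfig, so every kubectl step is skipped. A minimal sketch of the manual recovery the warning text suggests, using standard minikube/kubectl commands (profile name from the log; whether update-context can recreate a fully missing entry, rather than just refresh a stale one, is an assumption):

	# Rewrite the kubeconfig entry for this profile to the current endpoint:
	minikube update-context -p no-preload-099700
	# Then confirm the context resolves:
	kubectl config get-contexts no-preload-099700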

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (98.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-099700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 07:26:20.099206   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:26:39.166717   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-099700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.7590415s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-099700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
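All four manifests fail at the same step: kubectl cannot download the OpenAPI schema because nothing answers on localhost:8443, so this is the dead apiserver again rather than a problem with the metrics-server addon itself. A reachability check one could run before retrying, sketched with standard commands (profile name from the log; curl being present in the kicbase image is an assumption):

	# Probe the apiserver health endpoint from inside the node:
	minikube -p no-preload-099700 ssh -- curl -sk https://localhost:8443/healthz
	# Host-side view of the same components:
	minikube -p no-preload-099700 status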
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-099700 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-099700 describe deploy/metrics-server -n kube-system: exit status 1 (97.7469ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-099700" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-099700 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
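The describe step never reaches the cluster at all: kubectl rejects the command because the "no-preload-099700" context was never written to the kubeconfig, matching the status errors above. Listing what kubectl actually has is a one-liner with stock kubectl:

	# Show every context in the active kubeconfig; the profile is expected to be absent here.
	kubectl config get-contexts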
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372361,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:17:16.221120749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19d075be822285a6bc04718614fae0d1e6b527c2b7b973ed840dd03da78703c1",
	            "SandboxKey": "/var/run/docker/netns/19d075be8222",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56155"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56156"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "bd211c76c769a23696ddb9b2e4a3cd1f6c2388bff504ec060a8ffe809e64dcb5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
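The inspect output shows the container healthy from Docker's point of view: running since 07:17:16, with the apiserver port 8443/tcp published on 127.0.0.1:56156. Rather than reading the full JSON, the same mapping can be extracted directly, a sketch using stock docker commands (container name from the log):

	# Print the host endpoint bound to the container's apiserver port:
	docker port no-preload-099700 8443/tcp
	# Equivalent via a Go template over the inspect data:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-099700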
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 6 (622.2433ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:27:52.948147   11144 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
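The --format={{.Host}} flag is a Go template over minikube's status struct, which is why the command can report Running for the host while still exiting 6 on the failed kubeconfig lookup. The other documented status fields can be queried the same way, a sketch assuming a POSIX shell for the quoting:

	# Host, kubelet, apiserver and kubeconfig state in one line:
	minikube status -p no-preload-099700 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'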
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (1.1667037s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                   │          PROFILE          │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-648600 sudo journalctl -xeu kubelet --all --full --no-pager                                                       │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /var/lib/kubelet/config.yaml                                                                      │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status docker --all --full --no-pager                                                       │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat docker --no-pager                                                                       │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/docker/daemon.json                                                                           │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo docker system info                                                                                    │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status cri-docker --all --full --no-pager                                                   │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat cri-docker --no-pager                                                                   │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cri-dockerd --version                                                                                 │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status containerd --all --full --no-pager                                                   │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat containerd --no-pager                                                                   │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /lib/systemd/system/containerd.service                                                            │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo cat /etc/containerd/config.toml                                                                       │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo containerd config dump                                                                                │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl status crio --all --full --no-pager                                                         │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │                     │
	│ ssh     │ -p enable-default-cni-648600 sudo systemctl cat crio --no-pager                                                                         │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ ssh     │ -p enable-default-cni-648600 sudo crio config                                                                                           │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ delete  │ -p enable-default-cni-648600                                                                                                            │ enable-default-cni-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │ 10 Dec 25 07:26 UTC │
	│ start   │ -p kubenet-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker               │ kubenet-648600            │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:26 UTC │                     │
	│ ssh     │ -p bridge-648600 pgrep -a kubelet                                                                                                       │ bridge-648600             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │ 10 Dec 25 07:27 UTC │
	│ addons  │ enable metrics-server -p newest-cni-525200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ newest-cni-525200         │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:26:48
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:26:48.266493    8148 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:26:48.309472    8148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:26:48.309472    8148 out.go:374] Setting ErrFile to fd 1140...
	I1210 07:26:48.309472    8148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:26:48.324472    8148 out.go:368] Setting JSON to false
	I1210 07:26:48.327483    8148 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10540,"bootTime":1765341068,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:26:48.327483    8148 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:26:48.337470    8148 out.go:179] * [kubenet-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:26:48.341482    8148 notify.go:221] Checking for updates...
	I1210 07:26:48.341482    8148 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:26:48.344471    8148 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:26:48.348475    8148 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:26:48.350471    8148 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:26:48.352481    8148 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:26:48.355481    8148 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:26:48.356479    8148 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:26:48.356479    8148 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:26:48.356479    8148 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:26:48.469490    8148 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:26:48.472884    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:26:48.701836    8148 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:26:48.684462647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:26:48.708834    8148 out.go:179] * Using the docker driver based on user configuration
	I1210 07:26:48.710831    8148 start.go:309] selected driver: docker
	I1210 07:26:48.710831    8148 start.go:927] validating driver "docker" against <nil>
	I1210 07:26:48.710831    8148 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:26:48.750214    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:26:48.989910    8148 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:26:48.972100581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:26:48.989910    8148 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:26:48.990914    8148 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:26:48.992900    8148 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:26:48.994901    8148 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1210 07:26:48.994901    8148 start.go:353] cluster config:
	{Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:26:48.997899    8148 out.go:179] * Starting "kubenet-648600" primary control-plane node in "kubenet-648600" cluster
	I1210 07:26:48.999898    8148 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:26:49.001905    8148 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:26:49.003899    8148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:26:49.003899    8148 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:26:49.041924    8148 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:26:49.075903    8148 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:26:49.075903    8148 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:26:49.333654    8148 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:26:49.333654    8148 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\config.json ...
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:26:49.334674    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:26:49.334788    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\config.json: {Name:mkaac7fa5349378c0496ed588d277fbc123f31fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:26:49.334788    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:26:49.336061    8148 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:26:49.336061    8148 start.go:360] acquireMachinesLock for kubenet-648600: {Name:mk6a48ff53a7089496e004db762788b363661fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:49.336061    8148 start.go:364] duration metric: took 0s to acquireMachinesLock for "kubenet-648600"
	I1210 07:26:49.336061    8148 start.go:93] Provisioning new machine with config: &{Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:26:49.336657    8148 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:26:49.340104    8148 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:26:49.340610    8148 start.go:159] libmachine.API.Create for "kubenet-648600" (driver="docker")
	I1210 07:26:49.340757    8148 client.go:173] LocalClient.Create starting
	I1210 07:26:49.340893    8148 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:26:49.341421    8148 main.go:143] libmachine: Decoding PEM data...
	I1210 07:26:49.341512    8148 main.go:143] libmachine: Parsing certificate...
	I1210 07:26:49.341558    8148 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:26:49.341558    8148 main.go:143] libmachine: Decoding PEM data...
	I1210 07:26:49.341558    8148 main.go:143] libmachine: Parsing certificate...
	I1210 07:26:49.348012    8148 cli_runner.go:164] Run: docker network inspect kubenet-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:26:49.459518    8148 cli_runner.go:211] docker network inspect kubenet-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:26:49.468219    8148 network_create.go:284] running [docker network inspect kubenet-648600] to gather additional debugging logs...
	I1210 07:26:49.468219    8148 cli_runner.go:164] Run: docker network inspect kubenet-648600
	W1210 07:26:49.683727    8148 cli_runner.go:211] docker network inspect kubenet-648600 returned with exit code 1
	I1210 07:26:49.683727    8148 network_create.go:287] error running [docker network inspect kubenet-648600]: docker network inspect kubenet-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-648600 not found
	I1210 07:26:49.683727    8148 network_create.go:289] output of [docker network inspect kubenet-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-648600 not found
	
	** /stderr **
	I1210 07:26:49.688732    8148 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:26:49.787719    8148 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:26:49.823805    8148 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:26:50.041720    8148 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018195f0}
	I1210 07:26:50.041720    8148 network_create.go:124] attempt to create docker network kubenet-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:26:50.046595    8148 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600
	W1210 07:26:50.801474    8148 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600 returned with exit code 1
	W1210 07:26:50.801474    8148 network_create.go:149] failed to create docker network kubenet-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:26:50.801474    8148 network_create.go:116] failed to create docker network kubenet-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:26:50.923130    8148 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:26:51.134239    8148 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de23f0}
	I1210 07:26:51.134239    8148 network_create.go:124] attempt to create docker network kubenet-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:26:51.139999    8148 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-648600 kubenet-648600
	I1210 07:26:51.363502    8148 network_create.go:108] docker network kubenet-648600 192.168.76.0/24 created
	I1210 07:26:51.363502    8148 kic.go:121] calculated static IP "192.168.76.2" for the "kubenet-648600" container
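
	Note: the sequence above is minikube's subnet probe: reserved /24s are skipped, 192.168.67.0/24 is rejected by the daemon with "Pool overlaps with other one on this address space", and the next candidate (192.168.76.0/24) succeeds; the node's static IP is then the first client address after the .1 gateway. A rough bash equivalent of the retry, with a hypothetical network name:

	    # sketch only: walk candidate /24s and let the Docker daemon reject overlaps
	    for third in 49 58 67 76 85; do
	      subnet="192.168.${third}.0/24"
	      if docker network create --driver=bridge --subnet="$subnet" \
	           --gateway="192.168.${third}.1" demo-net >/dev/null 2>&1; then
	        echo "created demo-net on $subnet"
	        break
	      fi
	    done
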
	I1210 07:26:51.374867    8148 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:26:51.486602    8148 cli_runner.go:164] Run: docker volume create kubenet-648600 --label name.minikube.sigs.k8s.io=kubenet-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:26:51.579595    8148 oci.go:103] Successfully created a docker volume kubenet-648600
	I1210 07:26:51.586596    8148 cli_runner.go:164] Run: docker run --rm --name kubenet-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-648600 --entrypoint /usr/bin/test -v kubenet-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:26:52.364576    8148 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.365577    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:26:52.365577    8148 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.0307412s
	I1210 07:26:52.365577    8148 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:26:52.365577    8148 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.365577    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:26:52.366586    8148 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.0305751s
	I1210 07:26:52.366586    8148 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:26:52.366586    8148 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.367587    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:26:52.367587    8148 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.0328652s
	I1210 07:26:52.367587    8148 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:26:52.413544    8148 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.413544    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:26:52.413544    8148 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.0788213s
	I1210 07:26:52.413544    8148 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:26:52.427545    8148 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.428562    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:26:52.428562    8148 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.0935591s
	I1210 07:26:52.428562    8148 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:26:52.449539    8148 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.449539    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:26:52.449539    8148 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.1145826s
	I1210 07:26:52.449539    8148 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:26:52.487204    8148 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.487204    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:26:52.487204    8148 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.1522478s
	I1210 07:26:52.487204    8148 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:26:52.488221    8148 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:26:52.488221    8148 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:26:52.488221    8148 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.153497s
	I1210 07:26:52.488221    8148 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:26:52.488221    8148 cache.go:87] Successfully saved all images to host disk.
	I1210 07:26:53.284247    8148 cli_runner.go:217] Completed: docker run --rm --name kubenet-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-648600 --entrypoint /usr/bin/test -v kubenet-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6976247s)
	I1210 07:26:53.284247    8148 oci.go:107] Successfully prepared a docker volume kubenet-648600
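
	Note: the "preload-sidecar" run above is a volume-seeding trick: mounting the freshly created, empty named volume at /var makes Docker copy the kicbase image's /var contents into it, and "/usr/bin/test -d /var/lib" doubles as a harmless entrypoint plus a sanity check that the copy happened. The same pattern in isolation (volume and image names here are placeholders):

	    # seed a named volume from an image's /var, then verify it was populated
	    docker volume create demo-vol
	    docker run --rm --entrypoint /usr/bin/test -v demo-vol:/var <kicbase-image> -d /var/lib
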
	I1210 07:26:53.284247    8148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:26:53.288253    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:26:53.801115    4804 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:26:53.801115    4804 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:26:53.801115    4804 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:26:53.801115    4804 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:26:53.802154    4804 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:26:53.802154    4804 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:26:53.805187    4804 out.go:252]   - Generating certificates and keys ...
	I1210 07:26:53.805187    4804 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:26:53.805727    4804 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:26:53.806128    4804 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:26:53.806409    4804 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:26:53.806657    4804 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:26:53.806958    4804 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:26:53.807223    4804 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:26:53.807689    4804 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-648600 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1210 07:26:53.807736    4804 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:26:53.807736    4804 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-648600 localhost] and IPs [192.168.112.2 127.0.0.1 ::1]
	I1210 07:26:53.807736    4804 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:26:53.808275    4804 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:26:53.808492    4804 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:26:53.808659    4804 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:26:53.808757    4804 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:26:53.808852    4804 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:26:53.808900    4804 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:26:53.808900    4804 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:26:53.808900    4804 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:26:53.809439    4804 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:26:53.809681    4804 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:26:53.811927    4804 out.go:252]   - Booting up control plane ...
	I1210 07:26:53.812166    4804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:26:53.812359    4804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:26:53.812549    4804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:26:53.812864    4804 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:26:53.812901    4804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:26:53.812901    4804 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:26:53.812901    4804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:26:53.813434    4804 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:26:53.813781    4804 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:26:53.814076    4804 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:26:53.814242    4804 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 547.278359ms
	I1210 07:26:53.814462    4804 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:26:53.814642    4804 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.112.2:8443/livez
	I1210 07:26:53.814807    4804 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:26:53.815044    4804 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:26:53.815084    4804 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 11.930253219s
	I1210 07:26:53.815084    4804 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.602564259s
	I1210 07:26:53.815084    4804 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 17.003641444s
	I1210 07:26:53.815621    4804 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:26:53.816061    4804 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:26:53.816185    4804 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:26:53.816693    4804 kubeadm.go:319] [mark-control-plane] Marking the node bridge-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:26:53.816805    4804 kubeadm.go:319] [bootstrap-token] Using token: x3nlvh.opxvhtc30zotsvgx
	I1210 07:26:53.819383    4804 out.go:252]   - Configuring RBAC rules ...
	I1210 07:26:53.819553    4804 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:26:53.819780    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:26:53.819826    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:26:53.819826    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:26:53.820458    4804 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:26:53.820458    4804 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:26:53.820458    4804 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:26:53.821076    4804 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:26:53.821076    4804 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:26:53.821076    4804 kubeadm.go:319] 
	I1210 07:26:53.821076    4804 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:26:53.821076    4804 kubeadm.go:319] 
	I1210 07:26:53.821076    4804 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:26:53.821076    4804 kubeadm.go:319] 
	I1210 07:26:53.821076    4804 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:26:53.821649    4804 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:26:53.821695    4804 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:26:53.821695    4804 kubeadm.go:319] 
	I1210 07:26:53.821695    4804 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:26:53.821695    4804 kubeadm.go:319] 
	I1210 07:26:53.821695    4804 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:26:53.821695    4804 kubeadm.go:319] 
	I1210 07:26:53.821695    4804 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:26:53.822281    4804 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:26:53.822281    4804 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:26:53.822281    4804 kubeadm.go:319] 
	I1210 07:26:53.822281    4804 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:26:53.822281    4804 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:26:53.822281    4804 kubeadm.go:319] 
	I1210 07:26:53.822880    4804 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token x3nlvh.opxvhtc30zotsvgx \
	I1210 07:26:53.822989    4804 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:26:53.822989    4804 kubeadm.go:319] 	--control-plane 
	I1210 07:26:53.822989    4804 kubeadm.go:319] 
	I1210 07:26:53.822989    4804 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:26:53.822989    4804 kubeadm.go:319] 
	I1210 07:26:53.822989    4804 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token x3nlvh.opxvhtc30zotsvgx \
	I1210 07:26:53.822989    4804 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
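
	Note: the --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 over the cluster CA's public key. If it ever needs to be recomputed on the control plane, the pipeline from the kubeadm documentation reproduces it:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
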
	I1210 07:26:53.822989    4804 cni.go:84] Creating CNI manager for "bridge"
	I1210 07:26:53.825853    4804 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 07:26:53.832854    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 07:26:53.880596    4804 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
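
	Note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. For orientation only, a representative bridge-plus-portmap conflist of the same shape (the literal file minikube writes may differ):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
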
	I1210 07:26:53.976368    4804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:26:53.985234    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:53.986738    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-648600 minikube.k8s.io/updated_at=2025_12_10T07_26_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=bridge-648600 minikube.k8s.io/primary=true
	I1210 07:26:54.009995    4804 ops.go:34] apiserver oom_adj: -16
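
	Note: ops.go reads /proc/<apiserver pid>/oom_adj to confirm kubeadm left the apiserver OOM-protected; -16 means the kernel will prefer to kill other processes first under memory pressure. The same check by hand:

	    cat /proc/$(pgrep kube-apiserver)/oom_adj   # a healthy control plane reports a negative value, e.g. -16
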
	I1210 07:26:54.190378    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:53.550095    8148 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:26:53.530757229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:26:53.553467    8148 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:26:53.815828    8148 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-648600 --name kubenet-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-648600 --network kubenet-648600 --ip 192.168.76.2 --volume kubenet-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:26:54.519455    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Running}}
	I1210 07:26:54.581248    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:26:54.641935    8148 cli_runner.go:164] Run: docker exec kubenet-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:26:54.756925    8148 oci.go:144] the created container "kubenet-648600" has a running status.
	I1210 07:26:54.756925    8148 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa...
	I1210 07:26:54.811927    8148 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:26:54.884927    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:26:54.944940    8148 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:26:54.945950    8148 kic_runner.go:114] Args: [docker exec --privileged kubenet-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:26:55.067842    8148 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa...
	I1210 07:26:57.304431    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:26:57.356002    8148 machine.go:94] provisionDockerMachine start ...
	I1210 07:26:57.358999    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:57.409999    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:57.423964    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:57.423964    8148 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:26:57.601161    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-648600
	
	I1210 07:26:57.601161    8148 ubuntu.go:182] provisioning hostname "kubenet-648600"
	I1210 07:26:57.603876    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:57.663309    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:57.663799    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:57.663874    8148 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubenet-648600 && echo "kubenet-648600" | sudo tee /etc/hostname
	I1210 07:26:57.856555    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: kubenet-648600
	
	I1210 07:26:57.860405    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:57.919561    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:57.919561    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:57.919561    8148 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:26:58.104063    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:26:58.104063    8148 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:26:58.104121    8148 ubuntu.go:190] setting up certificates
	I1210 07:26:58.104162    8148 provision.go:84] configureAuth start
	I1210 07:26:58.107864    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-648600
	I1210 07:26:58.171039    8148 provision.go:143] copyHostCerts
	I1210 07:26:58.171691    8148 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:26:58.171691    8148 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:26:58.171691    8148 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:26:58.172425    8148 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:26:58.172949    8148 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:26:58.173167    8148 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:26:58.173800    8148 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:26:58.173800    8148 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:26:58.173800    8148 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:26:58.174689    8148 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-648600 san=[127.0.0.1 192.168.76.2 kubenet-648600 localhost minikube]
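
	Note: provision.go signs a Docker TLS server certificate whose SANs cover every name the machine is reached by (127.0.0.1, the container IP, the profile name, localhost, minikube). As an illustrative stand-in only (minikube does this in Go and signs with the profile CA rather than self-signing; requires OpenSSL 1.1.1+ for -addext), an equivalent SAN set via openssl:

	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout server-key.pem -out server.pem -subj "/O=jenkins.kubenet-648600" \
	      -addext "subjectAltName=DNS:kubenet-648600,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.76.2"
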
	I1210 07:26:58.255032    8148 provision.go:177] copyRemoteCerts
	I1210 07:26:58.258056    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:26:58.261058    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:54.691938    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:55.191139    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:55.690239    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:56.191849    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:56.688904    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:57.191206    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:57.689800    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:58.189890    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:58.691249    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:26:59.274789    4804 kubeadm.go:1114] duration metric: took 5.2983382s to wait for elevateKubeSystemPrivileges
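
	Note: the burst of "kubectl get sa default" runs every ~500ms above is elevateKubeSystemPrivileges waiting for the controller-manager to create the default ServiceAccount before the minikube-rbac binding can take effect. The poll reduces to:

	    # sketch of the wait loop: block until the default ServiceAccount appears
	    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
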
	I1210 07:26:59.274789    4804 kubeadm.go:403] duration metric: took 30.2725617s to StartCluster
	I1210 07:26:59.274789    4804 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:26:59.274789    4804 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:26:59.276563    4804 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:26:59.276767    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:26:59.276767    4804 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.112.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:26:59.276767    4804 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:26:59.276767    4804 addons.go:70] Setting storage-provisioner=true in profile "bridge-648600"
	I1210 07:26:59.276767    4804 addons.go:239] Setting addon storage-provisioner=true in "bridge-648600"
	I1210 07:26:59.276767    4804 addons.go:70] Setting default-storageclass=true in profile "bridge-648600"
	I1210 07:26:59.276767    4804 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-648600"
	I1210 07:26:59.276767    4804 host.go:66] Checking if "bridge-648600" exists ...
	I1210 07:26:59.276767    4804 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:26:59.286058    4804 out.go:179] * Verifying Kubernetes components...
	I1210 07:26:59.287899    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:26:59.287948    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:26:59.293641    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:26:59.356090    4804 addons.go:239] Setting addon default-storageclass=true in "bridge-648600"
	I1210 07:26:59.356090    4804 host.go:66] Checking if "bridge-648600" exists ...
	I1210 07:26:59.360086    4804 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:26:59.362082    4804 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:26:59.362082    4804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:26:59.363112    4804 cli_runner.go:164] Run: docker container inspect bridge-648600 --format={{.State.Status}}
	I1210 07:26:59.366098    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:59.424089    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:59.438092    4804 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:26:59.438092    4804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:26:59.441091    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:26:59.499085    4804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57145 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\bridge-648600\id_rsa Username:docker}
	I1210 07:26:59.767231    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:26:59.803181    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:26:59.973487    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:26:59.995416    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:27:00.469947    4804 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
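
	Note: the sed pipeline at 07:26:59 splices a hosts plugin block into the CoreDNS Corefile ahead of the forward directive (and a log directive before errors), so in-cluster lookups of host.minikube.internal resolve to the host gateway. Reconstructed from the sed expressions rather than read back from the cluster, the patched region reads roughly:

	        hosts {
	           192.168.65.254 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
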
	I1210 07:27:00.475439    4804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-648600
	I1210 07:27:00.538068    4804 node_ready.go:35] waiting up to 15m0s for node "bridge-648600" to be "Ready" ...
	I1210 07:27:00.566207    4804 node_ready.go:49] node "bridge-648600" is "Ready"
	I1210 07:27:00.566420    4804 node_ready.go:38] duration metric: took 28.3072ms for node "bridge-648600" to be "Ready" ...
	I1210 07:27:00.566493    4804 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:27:00.573540    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:27:01.065842    4804 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-648600" context rescaled to 1 replicas
	I1210 07:27:01.467730    4804 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.4942194s)
	I1210 07:27:01.467730    4804 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.472291s)
	I1210 07:27:01.467730    4804 api_server.go:72] duration metric: took 2.1909285s to wait for apiserver process to appear ...
	I1210 07:27:01.467730    4804 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:27:01.468738    4804 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57149/healthz ...
	I1210 07:27:01.487740    4804 api_server.go:279] https://127.0.0.1:57149/healthz returned 200:
	ok
	I1210 07:27:01.490750    4804 api_server.go:141] control plane version: v1.34.3
	I1210 07:27:01.490750    4804 api_server.go:131] duration metric: took 22.0114ms to wait for apiserver health ...
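
	Note: the healthz gate above is a plain HTTP probe against the forwarded apiserver port (57149 in this run; the port varies per profile). A minimal manual equivalent, skipping TLS verification the way a quick check would:

	    # sketch: poll until the apiserver answers 200 on /healthz
	    until curl -fsk https://127.0.0.1:57149/healthz >/dev/null; do sleep 1; done
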
	I1210 07:27:01.490750    4804 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:27:01.495741    4804 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1210 07:26:58.316687    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:26:58.448850    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:26:58.481116    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I1210 07:26:58.509443    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:26:58.539652    8148 provision.go:87] duration metric: took 435.4835ms to configureAuth
	I1210 07:26:58.539652    8148 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:26:58.540261    8148 config.go:182] Loaded profile config "kubenet-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:26:58.543649    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:58.601090    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:58.601090    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:58.601090    8148 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:26:58.781484    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:26:58.781484    8148 ubuntu.go:71] root file system type: overlay
	I1210 07:26:58.781484    8148 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:26:58.786185    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:58.844618    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:58.844618    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:58.844618    8148 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:26:59.066374    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:26:59.073011    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:26:59.134607    8148 main.go:143] libmachine: Using SSH client type: native
	I1210 07:26:59.135262    8148 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57306 <nil> <nil>}
	I1210 07:26:59.135262    8148 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:27:00.643707    8148 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:26:59.058519910 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
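
The SSH command above implements an idempotent install: `diff -u` exits non-zero only when the rendered unit differs from the installed one, and only then is the new file moved into place and docker reloaded, enabled, and restarted. A rough host-side Go equivalent of that pattern (an assumed shape for illustration; minikube runs the one-liner over SSH, not locally):

package main

import (
	"fmt"
	"os/exec"
)

// updateUnit replaces cur with next and restarts docker only when the two
// files differ, mirroring the `diff || { mv; daemon-reload; enable; restart; }`
// one-liner in the log.
func updateUnit(cur, next string) error {
	// diff exits non-zero when the files differ; that is the replace signal.
	if err := exec.Command("sudo", "diff", "-u", cur, next).Run(); err == nil {
		return nil // files identical: nothing to do
	}
	for _, args := range [][]string{
		{"mv", next, cur},
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("sudo %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println(err)
	}
}

The restart only fires on a real change, which is why unchanged re-provisioning runs leave the daemon alone.
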
	
	I1210 07:27:00.643707    8148 machine.go:97] duration metric: took 3.2876533s to provisionDockerMachine
	I1210 07:27:00.643707    8148 client.go:176] duration metric: took 11.3027721s to LocalClient.Create
	I1210 07:27:00.643707    8148 start.go:167] duration metric: took 11.3029188s to libmachine.API.Create "kubenet-648600"
	I1210 07:27:00.643707    8148 start.go:293] postStartSetup for "kubenet-648600" (driver="docker")
	I1210 07:27:00.643707    8148 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:27:00.649827    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:27:00.653512    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:00.716857    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:00.852079    8148 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:27:00.860820    8148 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:27:00.860820    8148 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:27:00.860820    8148 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:27:00.860820    8148 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:27:00.860820    8148 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:27:00.869338    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:27:00.884445    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:27:00.918715    8148 start.go:296] duration metric: took 274.9457ms for postStartSetup
	I1210 07:27:00.925503    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-648600
	I1210 07:27:00.987935    8148 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\config.json ...
	I1210 07:27:00.993933    8148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:27:00.996277    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:01.051137    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:01.182215    8148 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:27:01.194199    8148 start.go:128] duration metric: took 11.8573561s to createHost
	I1210 07:27:01.194242    8148 start.go:83] releasing machines lock for "kubenet-648600", held for 11.8579948s
	I1210 07:27:01.197930    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-648600
	I1210 07:27:01.248755    8148 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:27:01.252745    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:01.252745    8148 ssh_runner.go:195] Run: cat /version.json
	I1210 07:27:01.256527    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:01.318008    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:01.319032    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	W1210 07:27:01.443737    8148 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:27:01.447734    8148 ssh_runner.go:195] Run: systemctl --version
	I1210 07:27:01.462734    8148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:27:01.474736    8148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:27:01.478735    8148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:27:01.547557    8148 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:27:01.547557    8148 start.go:496] detecting cgroup driver to use...
	I1210 07:27:01.547557    8148 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:27:01.547557    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:27:01.555870    8148 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:27:01.555933    8148 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:27:01.583378    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:27:01.603382    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:27:01.624306    8148 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:27:01.629592    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:27:01.650610    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:27:01.670408    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:27:01.693106    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:27:01.714490    8148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:27:01.733417    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:27:01.754000    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:27:01.776716    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:27:01.809438    8148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:27:01.826200    8148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:27:01.841594    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:02.005837    8148 ssh_runner.go:195] Run: sudo systemctl restart containerd
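
The run of sed edits above rewrites /etc/containerd/config.toml in place so containerd agrees with the detected "cgroupfs" host driver, pins the pause image, and re-points the CNI conf dir before containerd is restarted. One of those edits, transliterated into Go purely as an illustration (not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in config fragment; the real target is containerd's full config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
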
	I1210 07:27:02.170025    8148 start.go:496] detecting cgroup driver to use...
	I1210 07:27:02.170025    8148 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:27:02.175824    8148 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:27:02.202812    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:27:02.227869    8148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:27:02.283513    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:27:02.307088    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:27:02.329964    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:27:02.360310    8148 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:27:02.373353    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:27:02.387541    8148 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I1210 07:27:02.414437    8148 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:27:02.553484    8148 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:27:02.664300    8148 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:27:02.664561    8148 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:27:02.696919    8148 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:27:02.720791    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:02.872487    8148 ssh_runner.go:195] Run: sudo systemctl restart docker
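
The 130-byte /etc/docker/daemon.json copied just before this restart is what actually pins docker's cgroup driver to "cgroupfs". Its exact contents are not echoed in the log; the sketch below is a plausible reconstruction, in which every field value is an assumption and only the cgroupdriver setting is implied by the surrounding lines:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical daemon.json; the log only tells us it is 130 bytes and
	// configures the "cgroupfs" driver.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
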
	I1210 07:27:01.498736    4804 addons.go:530] duration metric: took 2.2219344s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 07:27:01.510837    4804 system_pods.go:59] 8 kube-system pods found
	I1210 07:27:01.510947    4804 system_pods.go:61] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.510947    4804 system_pods.go:61] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.510947    4804 system_pods.go:61] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:01.510947    4804 system_pods.go:61] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:01.510947    4804 system_pods.go:61] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:01.511017    4804 system_pods.go:61] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:01.511061    4804 system_pods.go:61] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:01.511061    4804 system_pods.go:61] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending
	I1210 07:27:01.511061    4804 system_pods.go:74] duration metric: took 20.3103ms to wait for pod list to return data ...
	I1210 07:27:01.511114    4804 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:27:01.515919    4804 default_sa.go:45] found service account: "default"
	I1210 07:27:01.515919    4804 default_sa.go:55] duration metric: took 4.8051ms for default service account to be created ...
	I1210 07:27:01.515919    4804 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:27:01.529893    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:01.529926    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.529926    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.529962    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:01.529962    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:01.529982    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:01.529982    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:01.529982    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:01.529982    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:01.530046    4804 retry.go:31] will retry after 254.830899ms: missing components: kube-dns, kube-proxy
	I1210 07:27:01.793715    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:01.793715    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.793715    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:01.793715    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:01.793715    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:01.793715    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:01.793715    4804 retry.go:31] will retry after 366.083663ms: missing components: kube-dns, kube-proxy
	I1210 07:27:02.170025    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:02.170118    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.170118    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.170158    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:02.170158    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:02.170158    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:02.170263    4804 retry.go:31] will retry after 379.768039ms: missing components: kube-dns, kube-proxy
	I1210 07:27:02.560125    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:02.560125    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.560125    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:02.560125    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:02.560125    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:02.560125    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:02.560125    4804 retry.go:31] will retry after 606.226493ms: missing components: kube-dns, kube-proxy
	I1210 07:27:03.174432    4804 system_pods.go:86] 8 kube-system pods found
	I1210 07:27:03.174459    4804 system_pods.go:89] "coredns-66bc5c9577-drdxd" [cf0185d0-fabc-4045-82ae-1985176ac7d2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:03.174529    4804 system_pods.go:89] "coredns-66bc5c9577-w2ff8" [d3ef0b65-8051-4f98-8a05-4b512fed48f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:03.174556    4804 system_pods.go:89] "etcd-bridge-648600" [d1be9de0-1547-4bea-96a2-6c6f18b0b5c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-apiserver-bridge-648600" [49a7a37c-ed2d-42f8-9ebd-8e0fc25dfad8] Running
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-controller-manager-bridge-648600" [f0f8b824-6039-4158-99c3-0fbbb537fb95] Running
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-proxy-rvxdz" [0b48ba7e-6574-4dbf-9a6f-0cf235bd2b59] Running
	I1210 07:27:03.174556    4804 system_pods.go:89] "kube-scheduler-bridge-648600" [2fa395e0-512f-44f2-a6e6-d31024c11c77] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 07:27:03.174556    4804 system_pods.go:89] "storage-provisioner" [f3e50e94-3f5d-4c96-9aec-ba2e61f60ec0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:27:03.174636    4804 system_pods.go:126] duration metric: took 1.6586915s to wait for k8s-apps to be running ...
	I1210 07:27:03.174636    4804 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:27:03.180714    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:27:03.200565    4804 system_svc.go:56] duration metric: took 25.9286ms WaitForService to wait for kubelet
	I1210 07:27:03.200565    4804 kubeadm.go:587] duration metric: took 3.9237365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:27:03.200565    4804 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:27:03.206647    4804 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:27:03.206647    4804 node_conditions.go:123] node cpu capacity is 16
	I1210 07:27:03.206647    4804 node_conditions.go:105] duration metric: took 6.0818ms to run NodePressure ...
	I1210 07:27:03.206647    4804 start.go:242] waiting for startup goroutines ...
	I1210 07:27:03.206647    4804 start.go:247] waiting for cluster config update ...
	I1210 07:27:03.206647    4804 start.go:256] writing updated cluster config ...
	I1210 07:27:03.211392    4804 ssh_runner.go:195] Run: rm -f paused
	I1210 07:27:03.219165    4804 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:03.225327    4804 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:03.829804    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:27:03.853756    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:27:03.878736    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:27:03.906508    8148 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:27:04.058384    8148 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:27:04.211214    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:04.351382    8148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:27:04.377872    8148 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:27:04.403396    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:04.549162    8148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:27:04.679128    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:27:04.700666    8148 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:27:04.707017    8148 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:27:04.714385    8148 start.go:564] Will wait 60s for crictl version
	I1210 07:27:04.718390    8148 ssh_runner.go:195] Run: which crictl
	I1210 07:27:04.728384    8148 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:27:04.771885    8148 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:27:04.775508    8148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:27:04.821682    8148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:27:04.862789    8148 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:27:04.865761    8148 cli_runner.go:164] Run: docker exec -t kubenet-648600 dig +short host.docker.internal
	I1210 07:27:05.001510    8148 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:27:05.005662    8148 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:27:05.015254    8148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
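
The /etc/hosts rewrite above is an upsert: `grep -v` strips any stale host.minikube.internal line before the fresh mapping is appended, so repeated starts never accumulate duplicate entries. The same logic in Go, as a sketch:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line ending in "\t<name>" and appends the
// new mapping, mirroring the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.65.254", "host.minikube.internal"))
}
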
	I1210 07:27:05.035518    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:05.091651    8148 kubeadm.go:884] updating cluster {Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:27:05.092335    8148 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:27:05.098114    8148 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:27:05.143060    8148 docker.go:691] Got preloaded images: 
	I1210 07:27:05.143094    8148 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:27:05.143094    8148 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:27:05.156578    8148 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:05.159580    8148 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.163577    8148 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:27:05.164593    8148 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:05.167594    8148 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.168579    8148 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.172590    8148 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.172590    8148 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:27:05.176595    8148 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.176595    8148 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.180596    8148 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.180596    8148 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.185592    8148 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.185592    8148 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.187578    8148 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.192578    8148 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	W1210 07:27:05.220575    8148 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.269574    8148 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.317615    8148 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.371485    8148 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.423045    8148 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.475036    8148 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.525315    8148 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:27:05.575962    8148 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:27:05.683131    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.684679    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.686876    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:27:05.715167    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.726748    8148 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:27:05.726748    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:27:05.726748    8148 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:27:05.726748    8148 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.726748    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:27:05.726748    8148 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.733086    8148 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:27:05.733086    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:27:05.733086    8148 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:27:05.733086    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:27:05.733086    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:27:05.736695    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:27:05.743504    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.780383    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.786007    8148 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:27:05.786007    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:27:05.786007    8148 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.791803    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:27:05.797754    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.869665    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:27:05.869665    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:27:05.869665    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:27:05.878211    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:27:05.879080    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:27:05.879124    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:27:05.884965    8148 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:27:05.884965    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:27:05.885022    8148 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.891293    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:27:05.910745    8148 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:05.964950    8148 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:27:05.965026    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:27:05.965105    8148 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.971845    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:27:05.982670    8148 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:27:05.982670    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:27:05.982670    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:27:05.982670    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:27:05.982670    8148 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.982670    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:27:05.982670    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:27:05.982670    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:27:05.982670    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:27:05.982670    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:27:05.987758    8148 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:27:05.989944    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:27:06.087065    8148 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:27:06.087065    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:27:06.087065    8148 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:27:06.087065    8148 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:06.090779    8148 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:06.094460    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:27:06.177286    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:27:06.177286    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:27:06.177286    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:27:06.182289    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:27:06.201814    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:27:06.205794    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:27:06.214794    8148 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:27:06.214794    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:27:06.387866    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:27:06.387866    8148 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:27:06.387866    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:27:06.387866    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:27:06.387866    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:27:06.387866    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:27:06.387866    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:27:06.394865    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:27:06.469869    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:27:06.469869    8148 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:27:06.470873    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:27:07.386922    8148 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:27:07.386922    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1210 07:27:08.295994    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:27:08.295994    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:27:08.295994    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	W1210 07:27:05.249574    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	W1210 07:27:07.251932    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	I1210 07:27:10.869615    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.5735513s)
	I1210 07:27:10.869615    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:27:10.869615    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:27:10.869615    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:27:12.142371    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.272692s)
	I1210 07:27:12.142438    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:27:12.142487    8148 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:27:12.142539    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:27:09.737974    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	W1210 07:27:11.751244    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	I1210 07:27:17.659733    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (5.5170523s)
	I1210 07:27:17.659733    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:27:17.659733    8148 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:27:17.659733    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	W1210 07:27:14.889752    4804 pod_ready.go:104] pod "coredns-66bc5c9577-drdxd" is not "Ready", error: <nil>
	I1210 07:27:16.231616    4804 pod_ready.go:99] pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-drdxd" not found
	I1210 07:27:16.231616    4804 pod_ready.go:86] duration metric: took 13.0060841s for pod "coredns-66bc5c9577-drdxd" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:16.231616    4804 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w2ff8" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 07:27:18.243104    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	I1210 07:27:20.501640    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8418624s)
	I1210 07:27:20.501640    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:27:20.501640    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:27:20.501640    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:27:21.955780    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.4541176s)
	I1210 07:27:21.955780    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:27:21.955780    8148 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:27:21.955780    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	W1210 07:27:20.244187    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	W1210 07:27:22.743434    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	I1210 07:27:24.316577    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3607598s)
	I1210 07:27:24.316577    8148 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:27:24.316577    8148 cache_images.go:125] Successfully loaded all cached images
	I1210 07:27:24.316577    8148 cache_images.go:94] duration metric: took 19.1731423s to LoadCachedImages
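The transfer-and-load pattern in the lines above can be reproduced by hand. A minimal sketch, assuming a running profile and an image tarball already present in the host cache (paths here are illustrative):

	# Stage a cached image tarball on the node, then stream it into the
	# node's Docker daemon -- the same shape as the ssh_runner commands above.
	minikube cp ~/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 /var/lib/minikube/images/etcd_3.6.5-0
	minikube ssh -- 'sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load'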
	I1210 07:27:24.316577    8148 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 docker true true} ...
	I1210 07:27:24.316577    8148 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
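The kubelet unit and flags rendered above are written to the node a few lines later (kubelet.service and the 10-kubeadm.conf drop-in). A quick way to inspect what actually landed, assuming the default profile:

	# Show the effective kubelet unit plus the minikube-written drop-in.
	minikube ssh -- 'systemctl cat kubelet'
	minikube ssh -- 'cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf'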
	I1210 07:27:24.321252    8148 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:27:24.396301    8148 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1210 07:27:24.396301    8148 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:27:24.396301    8148 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-648600 NodeName:kubenet-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:27:24.396301    8148 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
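A config like the one rendered above can be sanity-checked on the node before init; a sketch, assuming a reasonably recent kubeadm that supports config validation:

	# Validate the generated config (path as used in this run).
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml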
	
	I1210 07:27:24.400789    8148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:27:24.413615    8148 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:27:24.420562    8148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:27:24.433705    8148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:27:24.433705    8148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:27:24.433705    8148 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 07:27:24.439326    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:27:24.439990    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:27:24.440097    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:27:24.459701    8148 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:27:24.459701    8148 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:27:24.459701    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:27:24.459701    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:27:24.464028    8148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:27:24.478444    8148 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:27:24.478444    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
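The checksum-pinned URLs logged above can be exercised directly; this is the standard dl.k8s.io verification pattern, shown for one of the three binaries:

	# Fetch kubelet v1.34.3 for linux/amd64 and check it against the
	# published sha256, as the 'Not caching binary' lines above describe.
	curl -fsSLO https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet
	echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check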
	I1210 07:27:26.366215    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:27:26.381210    8148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I1210 07:27:26.402108    8148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:27:26.421679    8148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1210 07:27:26.446789    8148 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:27:26.453858    8148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
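The one-liner above is an idempotent upsert of the control-plane host entry; expanded for readability (IP as in this run):

	# Drop any stale control-plane.minikube.internal line, append the
	# current one, then copy the result back over /etc/hosts.
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
	} > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts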
	I1210 07:27:26.473492    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:26.611727    8148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:27:26.633782    8148 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600 for IP: 192.168.76.2
	I1210 07:27:26.633782    8148 certs.go:195] generating shared ca certs ...
	I1210 07:27:26.633782    8148 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.634686    8148 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:27:26.634965    8148 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:27:26.635033    8148 certs.go:257] generating profile certs ...
	I1210 07:27:26.635033    8148 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.key
	I1210 07:27:26.635033    8148 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.crt with IP's: []
	I1210 07:27:26.716276    8148 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.crt ...
	I1210 07:27:26.716276    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.crt: {Name:mk02489e14eca5a7daf32070f5a9d62031c71ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.717274    8148 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.key ...
	I1210 07:27:26.717274    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\client.key: {Name:mkeee5be306abd033b56aba0cd7f1437696b5d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.718610    8148 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377
	I1210 07:27:26.719151    8148 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 07:27:26.796439    8148 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377 ...
	I1210 07:27:26.796439    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377: {Name:mk334d88b1581e29df7bfa117bfc64a88d82a6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.797603    8148 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377 ...
	I1210 07:27:26.797603    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377: {Name:mkb37914eaff81ec29f8166cb1744ed358d062f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.798758    8148 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt.785db377 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt
	I1210 07:27:26.812046    8148 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key.785db377 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key
	I1210 07:27:26.812968    8148 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key
	I1210 07:27:26.812968    8148 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt with IP's: []
	I1210 07:27:26.850220    8148 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt ...
	I1210 07:27:26.850220    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt: {Name:mke528aabf4c458c4ee7e7f83cf38c91aa7bd3e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:26.851417    8148 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key ...
	I1210 07:27:26.851417    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key: {Name:mk77fbad064846c54863bab29158ecadc03ea553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
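The profile certs generated above are ordinary x509 material signed by minikubeCA. A rough openssl equivalent for the apiserver cert, with the same IP SANs (CA file paths are illustrative):

	# Issue a cert signed by the profile CA with the SANs from this run.
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2') \
	  -days 365 -out apiserver.crt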
	I1210 07:27:26.864338    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:27:26.864780    8148 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:27:26.864780    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:27:26.864780    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:27:26.864780    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:27:26.865410    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:27:26.865501    8148 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:27:26.866122    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:27:26.900446    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:27:26.930968    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:27:26.958200    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:27:26.988784    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:27:27.022629    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:27:27.056583    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:27:27.085155    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:27:27.115053    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:27:27.150696    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:27:27.185123    8148 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:27:27.216567    8148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:27:27.250850    8148 ssh_runner.go:195] Run: openssl version
	I1210 07:27:27.267552    8148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.288656    8148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:27:27.305368    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.314528    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.320508    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:27:27.367783    8148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:27:27.385141    8148 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:27:27.404873    8148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.425284    8148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:27:27.443965    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.454282    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.458396    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:27:27.506425    8148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:27:27.526530    8148 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 07:27:27.543298    8148 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.561927    8148 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:27:27.581244    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.590435    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.594883    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:27:27.642220    8148 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:27:27.663487    8148 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
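The /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's subject-hash lookup convention; the same link can be rebuilt by hand for any trusted PEM:

	# Compute the subject hash and point the .0 lookup link at the cert.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"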
	I1210 07:27:27.686581    8148 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:27:27.694193    8148 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:27:27.694507    8148 kubeadm.go:401] StartCluster: {Name:kubenet-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kubenet-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:27:27.698413    8148 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:27:27.737485    8148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:27:27.756360    8148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:27:27.771482    8148 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:27:27.775635    8148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:27:27.789286    8148 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:27:27.789286    8148 kubeadm.go:158] found existing configuration files:
	
	I1210 07:27:27.794294    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:27:27.808089    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:27:27.812410    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:27:27.832458    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:27:27.846970    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:27:27.850863    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:27:27.868609    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:27:27.885013    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:27:27.891495    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:27:27.909197    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:27:27.922667    8148 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:27:27.927530    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
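The four grep-then-rm passes above apply one pattern per kubeconfig; a condensed sketch of the same sweep:

	# Keep a kubeconfig only if it already targets the expected endpoint.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done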
	I1210 07:27:27.944790    8148 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:27:28.058922    8148 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:27:28.064312    8148 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:27:28.172527    8148 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1210 07:27:24.746112    4804 pod_ready.go:104] pod "coredns-66bc5c9577-w2ff8" is not "Ready", error: <nil>
	I1210 07:27:27.243122    4804 pod_ready.go:94] pod "coredns-66bc5c9577-w2ff8" is "Ready"
	I1210 07:27:27.243148    4804 pod_ready.go:86] duration metric: took 11.0113593s for pod "coredns-66bc5c9577-w2ff8" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.250850    4804 pod_ready.go:83] waiting for pod "etcd-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.259311    4804 pod_ready.go:94] pod "etcd-bridge-648600" is "Ready"
	I1210 07:27:27.259351    4804 pod_ready.go:86] duration metric: took 8.501ms for pod "etcd-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.264046    4804 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.273922    4804 pod_ready.go:94] pod "kube-apiserver-bridge-648600" is "Ready"
	I1210 07:27:27.273922    4804 pod_ready.go:86] duration metric: took 9.8554ms for pod "kube-apiserver-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.278592    4804 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.437083    4804 pod_ready.go:94] pod "kube-controller-manager-bridge-648600" is "Ready"
	I1210 07:27:27.437083    4804 pod_ready.go:86] duration metric: took 158.4885ms for pod "kube-controller-manager-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:27.639502    4804 pod_ready.go:83] waiting for pod "kube-proxy-rvxdz" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.037381    4804 pod_ready.go:94] pod "kube-proxy-rvxdz" is "Ready"
	I1210 07:27:28.037381    4804 pod_ready.go:86] duration metric: took 397.7826ms for pod "kube-proxy-rvxdz" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.237745    4804 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.637206    4804 pod_ready.go:94] pod "kube-scheduler-bridge-648600" is "Ready"
	I1210 07:27:28.637301    4804 pod_ready.go:86] duration metric: took 399.4761ms for pod "kube-scheduler-bridge-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:28.637301    4804 pod_ready.go:40] duration metric: took 25.4177367s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:28.739566    4804 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:27:28.742299    4804 out.go:179] * Done! kubectl is now configured to use "bridge-648600" cluster and "default" namespace by default
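The pod_ready polling that just completed can be approximated from the host once the context exists; a sketch against the freshly configured cluster:

	# Wait for CoreDNS the same way the pod_ready loop above did.
	kubectl --context bridge-648600 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=120s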
	I1210 07:27:30.845427    6232 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 07:27:30.845427    6232 kubeadm.go:319] 
	I1210 07:27:30.846026    6232 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 07:27:30.849126    6232 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1210 07:27:30.849126    6232 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:27:30.849126    6232 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:27:30.849730    6232 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1210 07:27:30.849899    6232 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1210 07:27:30.850054    6232 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1210 07:27:30.850170    6232 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1210 07:27:30.850377    6232 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1210 07:27:30.850502    6232 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_INET: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1210 07:27:30.850528    6232 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1210 07:27:30.851207    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1210 07:27:30.851387    6232 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1210 07:27:30.851479    6232 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1210 07:27:30.852012    6232 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1210 07:27:30.852150    6232 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1210 07:27:30.852734    6232 kubeadm.go:319] OS: Linux
	I1210 07:27:30.852734    6232 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:27:30.853345    6232 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:27:30.853498    6232 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:27:30.853705    6232 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:27:30.853932    6232 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:27:30.854096    6232 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:27:30.854761    6232 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:27:30.855081    6232 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:27:30.855238    6232 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:27:32.136934    6232 out.go:252]   - Generating certificates and keys ...
	I1210 07:27:32.137702    6232 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:27:32.137951    6232 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:27:32.138057    6232 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:27:32.138229    6232 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:27:32.138420    6232 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:27:32.138953    6232 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:27:32.139119    6232 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:27:32.139293    6232 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:27:32.139454    6232 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:27:32.139561    6232 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:27:32.139676    6232 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:27:32.139890    6232 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:27:32.139890    6232 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:27:32.176956    6232 out.go:252]   - Booting up control plane ...
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:27:32.177131    6232 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:27:32.177675    6232 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:27:32.177887    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:27:32.177947    6232 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:27:32.178633    6232 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:27:32.178747    6232 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:27:32.178747    6232 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00091283s
	I1210 07:27:32.178747    6232 kubeadm.go:319] 
	I1210 07:27:32.178747    6232 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:27:32.179272    6232 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:27:32.179465    6232 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:27:32.179465    6232 kubeadm.go:319] 
	I1210 07:27:32.180034    6232 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:27:32.180034    6232 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:27:32.180034    6232 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:27:32.180034    6232 kubeadm.go:319] 
	I1210 07:27:32.180034    6232 kubeadm.go:403] duration metric: took 8m5.1768914s to StartCluster
	I1210 07:27:32.180034    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:27:32.184805    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:27:32.252290    6232 cri.go:89] found id: ""
	I1210 07:27:32.252290    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.252290    6232 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:27:32.252290    6232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:27:32.257295    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:27:32.524390    6232 cri.go:89] found id: ""
	I1210 07:27:32.524390    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.524390    6232 logs.go:284] No container was found matching "etcd"
	I1210 07:27:32.524390    6232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:27:32.529570    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:27:32.574711    6232 cri.go:89] found id: ""
	I1210 07:27:32.574765    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.574765    6232 logs.go:284] No container was found matching "coredns"
	I1210 07:27:32.574765    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:27:32.579249    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:27:32.620467    6232 cri.go:89] found id: ""
	I1210 07:27:32.620543    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.620543    6232 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:27:32.620543    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:27:32.624698    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:27:32.678505    6232 cri.go:89] found id: ""
	I1210 07:27:32.678505    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.678505    6232 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:27:32.678505    6232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:27:32.683647    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:27:32.734494    6232 cri.go:89] found id: ""
	I1210 07:27:32.734494    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.734494    6232 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:27:32.734494    6232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:27:32.740109    6232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:27:32.782096    6232 cri.go:89] found id: ""
	I1210 07:27:32.782096    6232 logs.go:282] 0 containers: []
	W1210 07:27:32.782096    6232 logs.go:284] No container was found matching "kindnet"
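The sweep above drives crictl with name filters and finds nothing, consistent with a kubelet that never came up; the same checks by hand:

	# Empty output means no matching container was ever created.
	sudo crictl ps -a --name=kube-apiserver --quiet
	# Anything that crashed after starting would show up here instead.
	sudo crictl ps -a --state=exited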
	I1210 07:27:32.782096    6232 logs.go:123] Gathering logs for kubelet ...
	I1210 07:27:32.782096    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:27:32.848542    6232 logs.go:123] Gathering logs for dmesg ...
	I1210 07:27:32.848542    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:27:32.887692    6232 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:27:32.887692    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:27:32.974167    6232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:27:32.961911   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.962935   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.963846   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.967478   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.968591   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:27:32.961911   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.962935   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.963846   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.967478   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:32.968591   10643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:27:32.974167    6232 logs.go:123] Gathering logs for Docker ...
	I1210 07:27:32.974167    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:27:33.009144    6232 logs.go:123] Gathering logs for container status ...
	I1210 07:27:33.009144    6232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:27:33.065279    6232 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
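	Given the healthz failure quoted above, the first probes on the node mirror the kubeadm hints (substitute the failing profile via -p; the profile for this PID is not named in this excerpt):

	# Check whether the kubelet is alive at all, then read its last logs.
	minikube ssh -- 'sudo systemctl status kubelet --no-pager'
	minikube ssh -- 'sudo journalctl -xeu kubelet | tail -n 50'
	minikube ssh -- 'curl -sS http://127.0.0.1:10248/healthz'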
	W1210 07:27:33.065279    6232 out.go:285] * 
	W1210 07:27:33.065279    6232 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:27:33.065279    6232 out.go:285] * 
	W1210 07:27:33.067510    6232 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:27:33.666818    6232 out.go:203] 
	W1210 07:27:33.825573    6232 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-rc.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00091283s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:27:33.825573    6232 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:27:33.825573    6232 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:27:33.873675    6232 out.go:203] 
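The two renderings of this failure above share one root cause: kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host (the kubelet journal later in this report shows the validation error directly). Two hedged workarounds, neither verified against this exact build; the first is the suggestion minikube itself prints above, the second maps the 'FailCgroupV1' option named in the kubeadm SystemVerification warning onto minikube's --extra-config flag, where the kubelet.failCgroupV1 spelling is an assumption:

    # Minikube's own suggestion, verbatim:
    out/minikube-windows-amd64.exe start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
    # Assumed --extra-config spelling of the 'FailCgroupV1' kubelet option from
    # the kubeadm warning; intended to let kubelet v1.35+ run on cgroup v1:
    out/minikube-windows-amd64.exe start -p <profile> --extra-config=kubelet.failCgroupV1=false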
	I1210 07:27:44.564269    8148 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:27:44.564496    8148 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:27:44.564590    8148 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:27:44.564590    8148 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:27:44.564590    8148 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:27:44.565130    8148 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:27:44.576058    8148 out.go:252]   - Generating certificates and keys ...
	I1210 07:27:44.576583    8148 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:27:44.576640    8148 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:27:44.576640    8148 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:27:44.576640    8148 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:27:44.577178    8148 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:27:44.577276    8148 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:27:44.577458    8148 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:27:44.577711    8148 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kubenet-648600 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:27:44.577879    8148 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:27:44.577992    8148 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kubenet-648600 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 07:27:44.577992    8148 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:27:44.577992    8148 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:27:44.577992    8148 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:27:44.578523    8148 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:27:44.578664    8148 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:27:44.578901    8148 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:27:44.579181    8148 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:27:44.579396    8148 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:27:44.579613    8148 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:27:44.579821    8148 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:27:44.580117    8148 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:27:44.582631    8148 out.go:252]   - Booting up control plane ...
	I1210 07:27:44.582631    8148 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:27:44.582631    8148 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:27:44.583285    8148 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:27:44.583285    8148 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:27:44.583285    8148 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:27:44.584001    8148 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:27:44.584001    8148 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:27:44.584001    8148 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:27:44.584001    8148 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:27:44.584844    8148 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:27:44.584844    8148 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001745713s
	I1210 07:27:44.584844    8148 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:27:44.585477    8148 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 07:27:44.585556    8148 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:27:44.585556    8148 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:27:44.585556    8148 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.837135991s
	I1210 07:27:44.585556    8148 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.828792487s
	I1210 07:27:44.586239    8148 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.00259734s
	I1210 07:27:44.586486    8148 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:27:44.586760    8148 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:27:44.586760    8148 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:27:44.586760    8148 kubeadm.go:319] [mark-control-plane] Marking the node kubenet-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:27:44.587307    8148 kubeadm.go:319] [bootstrap-token] Using token: joeue6.2iprn5m7sigi1gfz
	I1210 07:27:44.591330    8148 out.go:252]   - Configuring RBAC rules ...
	I1210 07:27:44.591330    8148 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:27:44.591330    8148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:27:44.591330    8148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:27:44.592292    8148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:27:44.592292    8148 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:27:44.592292    8148 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:27:44.592292    8148 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:27:44.592292    8148 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:27:44.592292    8148 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:27:44.592292    8148 kubeadm.go:319] 
	I1210 07:27:44.592292    8148 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:27:44.592292    8148 kubeadm.go:319] 
	I1210 07:27:44.592292    8148 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:27:44.593300    8148 kubeadm.go:319] 
	I1210 07:27:44.593300    8148 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:27:44.593300    8148 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:27:44.593300    8148 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:27:44.593300    8148 kubeadm.go:319] 
	I1210 07:27:44.593300    8148 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:27:44.593300    8148 kubeadm.go:319] 
	I1210 07:27:44.593300    8148 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:27:44.593300    8148 kubeadm.go:319] 
	I1210 07:27:44.593300    8148 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:27:44.593300    8148 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:27:44.593300    8148 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:27:44.593300    8148 kubeadm.go:319] 
	I1210 07:27:44.594296    8148 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:27:44.594296    8148 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:27:44.594296    8148 kubeadm.go:319] 
	I1210 07:27:44.594296    8148 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token joeue6.2iprn5m7sigi1gfz \
	I1210 07:27:44.594296    8148 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:27:44.594296    8148 kubeadm.go:319] 	--control-plane 
	I1210 07:27:44.594296    8148 kubeadm.go:319] 
	I1210 07:27:44.594296    8148 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:27:44.594296    8148 kubeadm.go:319] 
	I1210 07:27:44.594296    8148 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token joeue6.2iprn5m7sigi1gfz \
	I1210 07:27:44.595293    8148 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
	I1210 07:27:44.595293    8148 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1210 07:27:44.595293    8148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:27:44.600518    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:44.601179    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-648600 minikube.k8s.io/updated_at=2025_12_10T07_27_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=kubenet-648600 minikube.k8s.io/primary=true
	I1210 07:27:44.614260    8148 ops.go:34] apiserver oom_adj: -16
	I1210 07:27:44.769760    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:45.270333    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:45.771254    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:46.270378    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:46.770109    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:47.269516    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:47.769641    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:48.269985    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:48.770952    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:49.270540    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:49.769804    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:27:49.876561    8148 kubeadm.go:1114] duration metric: took 5.2811845s to wait for elevateKubeSystemPrivileges
	I1210 07:27:49.876561    8148 kubeadm.go:403] duration metric: took 22.1817315s to StartCluster
	I1210 07:27:49.876561    8148 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:49.876561    8148 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:27:49.878534    8148 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:27:49.879541    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:27:49.879541    8148 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:27:49.879541    8148 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:27:49.879541    8148 addons.go:70] Setting default-storageclass=true in profile "kubenet-648600"
	I1210 07:27:49.879541    8148 addons.go:70] Setting storage-provisioner=true in profile "kubenet-648600"
	I1210 07:27:49.879541    8148 addons.go:239] Setting addon storage-provisioner=true in "kubenet-648600"
	I1210 07:27:49.879541    8148 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubenet-648600"
	I1210 07:27:49.879541    8148 config.go:182] Loaded profile config "kubenet-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:27:49.879541    8148 host.go:66] Checking if "kubenet-648600" exists ...
	I1210 07:27:49.882535    8148 out.go:179] * Verifying Kubernetes components...
	I1210 07:27:49.892532    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:27:49.892532    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:27:49.892532    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:27:49.953541    8148 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:27:49.953541    8148 addons.go:239] Setting addon default-storageclass=true in "kubenet-648600"
	I1210 07:27:49.953541    8148 host.go:66] Checking if "kubenet-648600" exists ...
	I1210 07:27:49.955536    8148 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:27:49.955536    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:27:49.960533    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:49.962542    8148 cli_runner.go:164] Run: docker container inspect kubenet-648600 --format={{.State.Status}}
	I1210 07:27:50.023526    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:50.036526    8148 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:27:50.036526    8148 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:27:50.040526    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:50.097247    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57306 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-648600\id_rsa Username:docker}
	I1210 07:27:50.183329    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:27:50.472950    8148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:27:50.577835    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:27:50.578596    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:27:51.070634    8148 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1210 07:27:51.074383    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-648600
	I1210 07:27:51.137933    8148 node_ready.go:35] waiting up to 15m0s for node "kubenet-648600" to be "Ready" ...
	I1210 07:27:51.164337    8148 node_ready.go:49] node "kubenet-648600" is "Ready"
	I1210 07:27:51.164337    8148 node_ready.go:38] duration metric: took 26.4041ms for node "kubenet-648600" to be "Ready" ...
	I1210 07:27:51.164337    8148 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:27:51.170342    8148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:27:51.597991    8148 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-648600" context rescaled to 1 replicas
	I1210 07:27:51.899323    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3206699s)
	I1210 07:27:51.899323    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.3214675s)
	I1210 07:27:51.899323    8148 api_server.go:72] duration metric: took 2.0197503s to wait for apiserver process to appear ...
	I1210 07:27:51.899323    8148 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:27:51.899323    8148 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57305/healthz ...
	I1210 07:27:51.910103    8148 api_server.go:279] https://127.0.0.1:57305/healthz returned 200:
	ok
	I1210 07:27:51.913678    8148 api_server.go:141] control plane version: v1.34.3
	I1210 07:27:51.913735    8148 api_server.go:131] duration metric: took 14.4123ms to wait for apiserver health ...
	I1210 07:27:51.913735    8148 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:27:51.925668    8148 system_pods.go:59] 8 kube-system pods found
	I1210 07:27:51.925668    8148 system_pods.go:61] "coredns-66bc5c9577-bcxpw" [f9fb08f5-4d8f-4e14-b8e9-9c0f6684a8aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:51.925668    8148 system_pods.go:61] "coredns-66bc5c9577-thbtj" [79866873-8cdd-45f7-9770-8da49cf79721] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:27:51.925668    8148 system_pods.go:61] "etcd-kubenet-648600" [699fbeee-9ecb-43fa-a7ea-c547b92b1d61] Running
	I1210 07:27:51.925668    8148 system_pods.go:61] "kube-apiserver-kubenet-648600" [d76e5dfb-f9e9-48be-9286-b6a9a70dd8ca] Running
	I1210 07:27:51.925668    8148 system_pods.go:61] "kube-controller-manager-kubenet-648600" [270949e8-bf6c-4cc8-a9a7-21509cca1c05] Running
	I1210 07:27:51.925668    8148 system_pods.go:61] "kube-proxy-b7jtk" [6bff5f4d-95c7-4f92-ada1-8707090b0009] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 07:27:51.925668    8148 system_pods.go:61] "kube-scheduler-kubenet-648600" [450815fe-00af-4d50-9615-48dd81b7b447] Running
	I1210 07:27:51.925668    8148 system_pods.go:61] "storage-provisioner" [71858c57-7b2d-4b77-a0f5-e40b61f0f4ba] Pending
	I1210 07:27:51.925668    8148 system_pods.go:74] duration metric: took 11.9322ms to wait for pod list to return data ...
	I1210 07:27:51.925668    8148 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:27:51.926670    8148 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
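By contrast, the kubenet-648600 start above completes on the same host, and the sed pipeline run at 07:27:50.183329 injects a hosts block mapping host.minikube.internal to 192.168.65.254 into the coredns ConfigMap. A minimal sketch for confirming the record landed, assuming kubectl's current context points at this profile:

    # The ConfigMap's data key is "Corefile"; the injected block sits above the
    # forward directive:
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'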
	
	
	==> Docker <==
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653477207Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653491208Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653496809Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653502209Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653531612Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.653569015Z" level=info msg="Initializing buildkit"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.846125896Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.854786460Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855010880Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855019980Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 dockerd[1179]: time="2025-12-10T07:17:27.855177894Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:17:27 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:17:28 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:17:28Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:17:28 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 10 07:18:02 no-preload-099700 cri-dockerd[1471]: time="2025-12-10T07:18:02Z" level=info msg="Stop pulling image registry.k8s.io/etcd:3.6.6-0: Status: Downloaded newer image for registry.k8s.io/etcd:3.6.6-0"
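Note the dockerd warning above that cgroup v1 support is deprecated; it corroborates the kubelet validation failure that drives this test's outcome. A quick, generic check of which cgroup hierarchy the node mounts (plain coreutils stat, nothing minikube-specific):

    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy:
    stat -fc %T /sys/fs/cgroup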
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:27:54.026172   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:54.027574   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:54.028987   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:54.030039   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:27:54.031358   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
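kubectl is refused here because nothing ever bound 8443; the apiserver static pod cannot come up without a healthy kubelet. While the API is down, the node's container runtime can still be queried directly; a hedged sketch, assuming the profile container is running and that minikube ssh forwards the trailing command:

    out/minikube-windows-amd64.exe ssh -p no-preload-099700 -- sudo docker ps -a --filter name=kube-apiserver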
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:27] CPU: 4 PID: 450214 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fc67e5a5b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fc67e5a5af6.
	[  +0.000001] RSP: 002b:00007fffb6f4ee10 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.822699] CPU: 6 PID: 450591 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fa8d5a60b20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7fa8d5a60af6.
	[  +0.000001] RSP: 002b:00007ffd78f04e50 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +31.850979] tmpfs: Unknown parameter 'noswap'
	[  +9.313325] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:27:54 up  2:56,  0 user,  load average: 5.04, 5.34, 4.80
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:27:51 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:51 no-preload-099700 kubelet[13392]: E1210 07:27:51.194401   13392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:51 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:51 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:51 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 456.
	Dec 10 07:27:51 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:51 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:51 no-preload-099700 kubelet[13403]: E1210 07:27:51.951439   13403 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:51 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:51 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:52 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 457.
	Dec 10 07:27:52 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:52 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:52 no-preload-099700 kubelet[13428]: E1210 07:27:52.691132   13428 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:52 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:52 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:53 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 458.
	Dec 10 07:27:53 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:53 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:53 no-preload-099700 kubelet[13455]: E1210 07:27:53.439132   13455 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:27:53 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:27:53 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:27:54 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 10 07:27:54 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:27:54 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
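The kubelet journal at the end of the dump is the decisive evidence: every restart fails configuration validation with "kubelet is configured to not run on a host using cgroup v1", and systemd's restart counter has passed 456. Following the log's own journalctl advice, a hedged one-liner to watch the loop from the host:

    out/minikube-windows-amd64.exe ssh -p no-preload-099700 -- sudo journalctl -u kubelet --no-pager -n 20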
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 6 (588.4252ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:27:55.098926    9256 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (98.71s)
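The exit status 6 has two layers: the apiserver is stopped, and the profile is missing from the kubeconfig, so kubectl points at a stale context. For the second layer, the status output's own hint applies:

    out/minikube-windows-amd64.exe update-context -p no-preload-099700

This only repairs the kubeconfig entry; it does not start the stopped apiserver.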

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (106.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-525200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 07:27:38.915346   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:42.023386   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-525200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.7362485s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_6.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-525200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
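The addon enable failed only because every validation request hit a dead apiserver on localhost:8443; the manifests themselves were never evaluated. Once the control plane is healthy, re-running the callback (command copied verbatim from the stderr above) should apply cleanly:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml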
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-525200
helpers_test.go:244: (dbg) docker inspect newest-cni-525200:

-- stdout --
	[
	    {
	        "Id": "6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188",
	        "Created": "2025-12-10T07:18:58.277037255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 386736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:18:58.731857599Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hostname",
	        "HostsPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hosts",
	        "LogPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188-json.log",
	        "Name": "/newest-cni-525200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-525200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-525200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-525200",
	                "Source": "/var/lib/docker/volumes/newest-cni-525200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-525200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-525200",
	                "name.minikube.sigs.k8s.io": "newest-cni-525200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ee1da76fdf10ac9d4681072362e0cf44891c60757ab9c3416e1dbad070bcf47a",
	            "SandboxKey": "/var/run/docker/netns/ee1da76fdf10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56385"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56383"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56384"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-525200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e73cdc5fd1be9396722947f498060ee7b5757251a78043b99e30abfea0ec658b",
	                    "EndpointID": "6249979e88a9b3e5e68a719fd3a78844751030cbdde0814c42ef0e5994cbd694",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-525200",
	                        "6b7f9063cbda"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
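
The inspect dump above carries the two pieces of state the harness keeps re-reading: the published host ports under NetworkSettings.Ports and the node address under NetworkSettings.Networks. Rather than parsing the JSON, minikube pulls single fields with Go templates; here is a minimal sketch using the same template patterns that appear in the cli_runner traces later in this report (container name and expected values are taken from the dump above; assumes a shell that passes single-quoted arguments through, e.g. PowerShell or bash):

    # host port that container port 22/tcp is published on; 56385 in the dump above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-525200
    # node IP on the profile network; 192.168.121.2 in the dump above
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-525200
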
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200: exit status 6 (586.366ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:29:21.962372   13064 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-525200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25: (1.2280975s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │    PROFILE     │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-648600 sudo systemctl status kubelet --all --full --no-pager                                    │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl cat kubelet --no-pager                                                    │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo journalctl -xeu kubelet --all --full --no-pager                                     │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /etc/kubernetes/kubelet.conf                                                    │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /var/lib/kubelet/config.yaml                                                    │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl status docker --all --full --no-pager                                     │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl cat docker --no-pager                                                     │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /etc/docker/daemon.json                                                         │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo docker system info                                                                  │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl status cri-docker --all --full --no-pager                                 │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl cat cri-docker --no-pager                                                 │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                            │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /usr/lib/systemd/system/cri-docker.service                                      │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cri-dockerd --version                                                               │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl status containerd --all --full --no-pager                                 │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl cat containerd --no-pager                                                 │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /lib/systemd/system/containerd.service                                          │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo cat /etc/containerd/config.toml                                                     │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo containerd config dump                                                              │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo systemctl status crio --all --full --no-pager                                       │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │                     │
	│ ssh     │ -p kubenet-648600 sudo systemctl cat crio --no-pager                                                       │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                             │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ ssh     │ -p kubenet-648600 sudo crio config                                                                         │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ delete  │ -p kubenet-648600                                                                                          │ kubenet-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │ 10 Dec 25 07:29 UTC │
	│ start   │ -p false-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker │ false-648600   │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:29:22
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:29:22.052387    6324 out.go:360] Setting OutFile to fd 1276 ...
	I1210 07:29:22.099393    6324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:29:22.099393    6324 out.go:374] Setting ErrFile to fd 1260...
	I1210 07:29:22.099393    6324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:29:22.116402    6324 out.go:368] Setting JSON to false
	I1210 07:29:22.120379    6324 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10694,"bootTime":1765341068,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:29:22.120379    6324 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:29:22.126393    6324 out.go:179] * [false-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:29:22.129388    6324 notify.go:221] Checking for updates...
	I1210 07:29:22.131378    6324 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:29:22.134394    6324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:29:22.136378    6324 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:29:22.138392    6324 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:29:22.141389    6324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
	==> Docker <==
	Dec 10 07:19:16 newest-cni-525200 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.062791191Z" level=info msg="Starting up"
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.084601896Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.084748710Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.084762511Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.101611637Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.245015073Z" level=info msg="Loading containers: start."
	Dec 10 07:19:16 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:16.245162987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.400213681Z" level=info msg="Restoring containers: start."
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.481783615Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.531401619Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.874477245Z" level=info msg="Loading containers: done."
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923622004Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923705712Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923715913Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923722613Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923729214Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923757017Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:19:22 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:22.923825523Z" level=info msg="Initializing buildkit"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.052360909Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.059794414Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.060067240Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.060194252Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:19:23 newest-cni-525200 dockerd[1637]: time="2025-12-10T07:19:23.060089142Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:19:23 newest-cni-525200 systemd[1]: Started docker.service - Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:29:23.096408   13012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:29:23.097635   13012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:29:23.098651   13012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:29:23.099892   13012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:29:23.100987   13012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +7.317488] CPU: 6 PID: 462690 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f4e5eb4db20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f4e5eb4daf6.
	[  +0.000001] RSP: 002b:00007ffd363f4560 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.854729] CPU: 2 PID: 462840 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f6e5826cb20
	[  +0.000006] Code: Unable to access opcode bytes at RIP 0x7f6e5826caf6.
	[  +0.000001] RSP: 002b:00007fffdc097450 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:29] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:29:23 up  2:57,  0 user,  load average: 7.64, 5.71, 4.96
	Linux newest-cni-525200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:29:19 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:29:20 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 10 07:29:20 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:20 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:20 newest-cni-525200 kubelet[12830]: E1210 07:29:20.708300   12830 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:29:20 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:29:20 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:29:21 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 10 07:29:21 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:21 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:21 newest-cni-525200 kubelet[12856]: E1210 07:29:21.450348   12856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:29:21 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:29:21 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:22 newest-cni-525200 kubelet[12884]: E1210 07:29:22.187570   12884 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:29:22 newest-cni-525200 kubelet[12965]: E1210 07:29:22.954262   12965 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:29:22 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
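
The kubelet section above is the actual failure mode for this profile: kubelet v1.35.0-rc.1 validates its configuration, refuses to run on a host using cgroup v1, and exits, so systemd cycles it (restart counters 461 through 464) and the apiserver on localhost:8443 never comes up. The WSL2 host here is still on cgroup v1, which the Docker daemon log above also flags as deprecated. A quick way to confirm the cgroup mode is the sketch below; it assumes the node container is still running and has coreutils available ("tmpfs" indicates cgroup v1, "cgroup2fs" indicates the v2 unified hierarchy):

    docker exec newest-cni-525200 stat -fc %T /sys/fs/cgroup/
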
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 6 (820.3139ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:29:24.010670    7940 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-525200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-525200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (106.46s)
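
Independently of the kubelet failure, both status probes above exit 6 because the "newest-cni-525200" entry is missing from the kubeconfig, which is also why the stale-context warning is printed. The warning's own suggested fix, preceded by a look at what the kubeconfig actually records, would be (a sketch; assumes the profile still exists when run):

    kubectl config get-contexts
    minikube update-context -p newest-cni-525200
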

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (378.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1
E1210 07:28:02.342595   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:09.640330   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.195158   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.203151   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.216149   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.239156   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.282152   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.364547   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.527455   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:10.850240   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:11.492678   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:12.774581   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:28:15.336827   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 80 (6m14.8694022s)
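
The cert_rotation errors above are background noise from the test binary rather than part of this failure: its Kubernetes client keeps trying to reload client certificates for other profiles (functional-493600, auto-648600, kindnet-648600) whose files are gone, most plausibly because those profiles were torn down earlier in the run (an inference; only the kubenet-648600 delete is visible in the Audit table above). A sketch to list which profile directories remain on this host (path taken from the error messages above):

    dir C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles
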

                                                
                                                
-- stdout --
	* [no-preload-099700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "no-preload-099700" primary control-plane node in "no-preload-099700" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:27:57.723065    6044 out.go:360] Setting OutFile to fd 1436 ...
	I1210 07:27:57.786120    6044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:57.786120    6044 out.go:374] Setting ErrFile to fd 1652...
	I1210 07:27:57.786120    6044 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:57.800126    6044 out.go:368] Setting JSON to false
	I1210 07:27:57.803131    6044 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10609,"bootTime":1765341068,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:27:57.803131    6044 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:27:57.807139    6044 out.go:179] * [no-preload-099700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:27:57.809135    6044 notify.go:221] Checking for updates...
	I1210 07:27:57.812128    6044 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:27:57.814120    6044 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:27:57.817119    6044 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:27:57.819122    6044 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:27:57.821120    6044 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:27:57.824126    6044 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:27:57.825133    6044 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:27:57.958549    6044 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:27:57.961530    6044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:27:58.233674    6044 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:27:58.209052816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:27:58.237672    6044 out.go:179] * Using the docker driver based on existing profile
	I1210 07:27:58.239676    6044 start.go:309] selected driver: docker
	I1210 07:27:58.239676    6044 start.go:927] validating driver "docker" against &{Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:27:58.239676    6044 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:27:58.294674    6044 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:27:58.531916    6044 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:27:58.511559598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:27:58.531916    6044 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:27:58.531916    6044 cni.go:84] Creating CNI manager for ""
	I1210 07:27:58.531916    6044 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:27:58.532912    6044 start.go:353] cluster config:
	{Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:27:58.535911    6044 out.go:179] * Starting "no-preload-099700" primary control-plane node in "no-preload-099700" cluster
	I1210 07:27:58.537915    6044 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:27:58.539937    6044 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:27:58.542915    6044 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:27:58.542915    6044 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:27:58.542915    6044 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\config.json ...
	I1210 07:27:58.542915    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:27:58.543918    6044 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1210 07:27:58.723512    6044 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:27:58.723512    6044 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:27:58.723512    6044 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:27:58.723512    6044 start.go:360] acquireMachinesLock for no-preload-099700: {Name:mkc8e995140dc54401ffafd9be7c06a8281abfd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:27:58.723512    6044 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-099700"
	I1210 07:27:58.723512    6044 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:27:58.723512    6044 fix.go:54] fixHost starting: 
	I1210 07:27:58.748274    6044 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:27:58.925025    6044 fix.go:112] recreateIfNeeded on no-preload-099700: state=Stopped err=<nil>
	W1210 07:27:58.925025    6044 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:27:58.935025    6044 out.go:252] * Restarting existing docker container for "no-preload-099700" ...
	I1210 07:27:58.940027    6044 cli_runner.go:164] Run: docker start no-preload-099700
	I1210 07:28:00.626007    6044 cli_runner.go:217] Completed: docker start no-preload-099700: (1.6859542s)
	I1210 07:28:00.637429    6044 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:28:00.742344    6044 kic.go:430] container "no-preload-099700" state is running.
	I1210 07:28:00.751370    6044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-099700
	I1210 07:28:00.838110    6044 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\config.json ...
	I1210 07:28:00.840111    6044 machine.go:94] provisionDockerMachine start ...
	I1210 07:28:00.846114    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:00.934170    6044 main.go:143] libmachine: Using SSH client type: native
	I1210 07:28:00.935153    6044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57436 <nil> <nil>}
	I1210 07:28:00.935153    6044 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:28:00.938167    6044 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:28:01.913376    6044 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.914378    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:28:01.914378    6044 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.3704074s
	I1210 07:28:01.914378    6044 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:28:01.916381    6044 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.917377    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1210 07:28:01.917377    6044 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 3.3734061s
	I1210 07:28:01.917377    6044 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1210 07:28:01.922380    6044 cache.go:107] acquiring lock: {Name:mkcf25f639af7f4007c4b4fab61572d5959a6d86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.922380    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1 exists
	I1210 07:28:01.922380    6044 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-rc.1" took 3.3784096s
	I1210 07:28:01.922380    6044 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-rc.1 succeeded
	I1210 07:28:01.924377    6044 cache.go:107] acquiring lock: {Name:mkbb0c8fa4da62a80ed9d6679bee657142469def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.924377    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1 exists
	I1210 07:28:01.924377    6044 cache.go:107] acquiring lock: {Name:mk732492e3e0368b966de7b10f5eb5a7a6586537 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.924377    6044 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-rc.1" took 3.3804064s
	I1210 07:28:01.924377    6044 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-rc.1 succeeded
	I1210 07:28:01.924377    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1 exists
	I1210 07:28:01.925382    6044 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-rc.1" took 3.3814109s
	I1210 07:28:01.925382    6044 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-rc.1 succeeded
	I1210 07:28:01.930389    6044 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.930389    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:28:01.930389    6044 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.3864179s
	I1210 07:28:01.930389    6044 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:28:01.984121    6044 cache.go:107] acquiring lock: {Name:mk16b9d3dcf33fab9768fe75991ea4fd479f5b62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:01.984121    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1 exists
	I1210 07:28:01.984121    6044 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-rc.1" took 3.440149s
	I1210 07:28:01.984121    6044 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-rc.1 succeeded
	I1210 07:28:02.006116    6044 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:28:02.007116    6044 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:28:02.007116    6044 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.4631435s
	I1210 07:28:02.007116    6044 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:28:02.007116    6044 cache.go:87] Successfully saved all images to host disk.
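	
	Each cache.go block above follows the same pattern: take a per-image file lock, find the tarball already present under the cache directory, and record the save as a no-op. A sketch of checking those paths yourself (POSIX path form assumed; on this Windows host the same files live under C:\Users\jenkins.minikube4\minikube-integration\.minikube):
	
	    CACHE="$HOME/.minikube/cache/images/amd64"
	    for img in gcr.io/k8s-minikube/storage-provisioner_v5 registry.k8s.io/pause_3.10.1; do
	      [ -e "$CACHE/$img" ] && echo "cached:  $img" || echo "missing: $img"
	    done
	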
	I1210 07:28:04.113382    6044 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-099700
	
	I1210 07:28:04.113382    6044 ubuntu.go:182] provisioning hostname "no-preload-099700"
	I1210 07:28:04.116392    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:04.181490    6044 main.go:143] libmachine: Using SSH client type: native
	I1210 07:28:04.181490    6044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57436 <nil> <nil>}
	I1210 07:28:04.181490    6044 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-099700 && echo "no-preload-099700" | sudo tee /etc/hostname
	I1210 07:28:04.382339    6044 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-099700
	
	I1210 07:28:04.387475    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:04.444121    6044 main.go:143] libmachine: Using SSH client type: native
	I1210 07:28:04.445114    6044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57436 <nil> <nil>}
	I1210 07:28:04.445114    6044 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-099700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-099700/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-099700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:28:04.618024    6044 main.go:143] libmachine: SSH cmd err, output: <nil>: 
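	
	The /etc/hosts guard above is idempotent: it rewrites the 127.0.1.1 line, or appends one, only when no entry for the new hostname exists yet. The same logic as a standalone script, with the hostname parameterized (variable name illustrative; GNU grep/sed assumed):
	
	    HOSTNAME=no-preload-099700
	    if ! grep -q "[[:space:]]$HOSTNAME\$" /etc/hosts; then
	      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOSTNAME/" /etc/hosts
	      else
	        echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
	      fi
	    fi
	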
	I1210 07:28:04.618114    6044 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:28:04.618178    6044 ubuntu.go:190] setting up certificates
	I1210 07:28:04.618178    6044 provision.go:84] configureAuth start
	I1210 07:28:04.621666    6044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-099700
	I1210 07:28:04.678581    6044 provision.go:143] copyHostCerts
	I1210 07:28:04.679577    6044 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:28:04.679577    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:28:04.679577    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:28:04.680576    6044 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:28:04.680576    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:28:04.680576    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:28:04.681578    6044 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:28:04.681578    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:28:04.681578    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:28:04.682587    6044 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-099700 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-099700]
	I1210 07:28:04.790586    6044 provision.go:177] copyRemoteCerts
	I1210 07:28:04.794580    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:28:04.798587    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:04.848589    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:04.969586    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:28:04.996607    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:28:05.027038    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:28:05.055972    6044 provision.go:87] duration metric: took 437.742ms to configureAuth
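	
	provision.go:117 issues a server certificate signed by the minikube CA, with the SAN list shown above (127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-099700). An openssl equivalent, as a sketch rather than minikube's exact invocation, assuming ca.pem and ca-key.pem from the certs directory:
	
	    # Key and CSR for the machine
	    openssl genrsa -out server-key.pem 2048
	    openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-099700" -out server.csr
	
	    # Sign with the CA, embedding the SANs the log reports
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:localhost,DNS:minikube,DNS:no-preload-099700') \
	      -out server.pem
	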
	I1210 07:28:05.055972    6044 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:28:05.056538    6044 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:28:05.060385    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:05.118419    6044 main.go:143] libmachine: Using SSH client type: native
	I1210 07:28:05.119409    6044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57436 <nil> <nil>}
	I1210 07:28:05.119409    6044 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:28:05.297577    6044 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:28:05.297608    6044 ubuntu.go:71] root file system type: overlay
	I1210 07:28:05.297642    6044 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:28:05.300715    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:05.381986    6044 main.go:143] libmachine: Using SSH client type: native
	I1210 07:28:05.382540    6044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57436 <nil> <nil>}
	I1210 07:28:05.382540    6044 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:28:05.569573    6044 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:28:05.574391    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:05.634686    6044 main.go:143] libmachine: Using SSH client type: native
	I1210 07:28:05.634686    6044 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57436 <nil> <nil>}
	I1210 07:28:05.635680    6044 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:28:05.816880    6044 main.go:143] libmachine: SSH cmd err, output: <nil>: 
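	
	The one-liner above is minikube's change-detection idiom for the docker unit: diff -u succeeds, and nothing happens, when the freshly rendered docker.service.new matches the installed file; only on a difference does the group after || move the new unit into place and reload, enable, and restart docker. Spelled out:
	
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload
	      sudo systemctl enable docker
	      sudo systemctl restart docker
	    fi
	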
	I1210 07:28:05.816880    6044 machine.go:97] duration metric: took 4.9766909s to provisionDockerMachine
	I1210 07:28:05.816880    6044 start.go:293] postStartSetup for "no-preload-099700" (driver="docker")
	I1210 07:28:05.816880    6044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:28:05.821791    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:28:05.825310    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:05.882098    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:06.014195    6044 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:28:06.022580    6044 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:28:06.022607    6044 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:28:06.022652    6044 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:28:06.022859    6044 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:28:06.023381    6044 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:28:06.028144    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:28:06.043597    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:28:06.077954    6044 start.go:296] duration metric: took 261.0692ms for postStartSetup
	I1210 07:28:06.082456    6044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:28:06.085815    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:06.141683    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:06.260841    6044 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:28:06.270846    6044 fix.go:56] duration metric: took 7.5472151s for fixHost
	I1210 07:28:06.270846    6044 start.go:83] releasing machines lock for "no-preload-099700", held for 7.5472151s
	I1210 07:28:06.274836    6044 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-099700
	I1210 07:28:06.328852    6044 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:28:06.332832    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:06.332832    6044 ssh_runner.go:195] Run: cat /version.json
	I1210 07:28:06.335840    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:06.399516    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:06.399516    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	W1210 07:28:06.514323    6044 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
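	
	Exit status 127 means "command not found": the connectivity probe passed the Windows binary name curl.exe to the Linux shell inside the guest, so the registry check never ran at all. That suggests the proxy warning below is an artifact of the probe rather than proof of a network problem. Inside the guest, the intended check would simply be:
	
	    curl -sS -m 2 https://registry.k8s.io/
	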
	I1210 07:28:06.532716    6044 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:06.550816    6044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:28:06.560393    6044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:28:06.566136    6044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:28:06.579222    6044 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:28:06.579222    6044 start.go:496] detecting cgroup driver to use...
	I1210 07:28:06.579222    6044 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:28:06.579222    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:28:06.607228    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1210 07:28:06.621336    6044 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:28:06.621336    6044 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:28:06.627093    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:28:06.642521    6044 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:28:06.645514    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:28:06.663534    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:28:06.680531    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:28:06.701656    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:28:06.732764    6044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:28:06.755632    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:28:06.780481    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:28:06.804323    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
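	
	The sed series above edits /etc/containerd/config.toml in place: pin the sandbox image to pause:3.10.1, set SystemdCgroup = false to match the detected cgroupfs driver, migrate the legacy io.containerd.runtime.v1.linux and io.containerd.runc.v1 names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true. One way to verify the result after the containerd restart at 07:28:07:
	
	    grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	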
	I1210 07:28:06.823332    6044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:28:06.838320    6044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:28:06.854318    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:28:07.020242    6044 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:28:07.189355    6044 start.go:496] detecting cgroup driver to use...
	I1210 07:28:07.189355    6044 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:28:07.194371    6044 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:28:07.218371    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:28:07.240353    6044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:28:07.307874    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:28:07.333007    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:28:07.353420    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:28:07.382308    6044 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:28:07.395151    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:28:07.410394    6044 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:28:07.434917    6044 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:28:07.598235    6044 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:28:07.762856    6044 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:28:07.762856    6044 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:28:07.787849    6044 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:28:07.809847    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:28:07.964775    6044 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:28:08.934760    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:28:08.960109    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:28:08.983423    6044 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 07:28:09.008314    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:28:09.030696    6044 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:28:09.193390    6044 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:28:09.380967    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:28:09.538619    6044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:28:09.574548    6044 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:28:09.599392    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:28:09.777258    6044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:28:09.895354    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
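	
	Between 07:28:07 and 07:28:09 the runtime is switched from containerd to docker plus cri-dockerd: containerd is stopped, crictl is repointed at /var/run/cri-dockerd.sock, and the docker and cri-docker units are unmasked, enabled, and restarted. Condensed (the interleaved reset-failed and is-active probes are omitted):
	
	    sudo systemctl stop -f containerd
	    sudo systemctl unmask docker.service
	    sudo systemctl enable docker.socket
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	    sudo systemctl unmask cri-docker.socket
	    sudo systemctl enable cri-docker.socket
	    sudo systemctl restart cri-docker.socket cri-docker.service
	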
	I1210 07:28:09.914156    6044 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:28:09.919137    6044 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:28:09.925981    6044 start.go:564] Will wait 60s for crictl version
	I1210 07:28:09.929965    6044 ssh_runner.go:195] Run: which crictl
	I1210 07:28:09.944150    6044 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:28:09.995203    6044 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:28:09.998510    6044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:28:10.049167    6044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:28:10.102928    6044 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 07:28:10.107130    6044 cli_runner.go:164] Run: docker exec -t no-preload-099700 dig +short host.docker.internal
	I1210 07:28:10.237158    6044 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:28:10.241155    6044 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:28:10.248149    6044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
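	
	The one-liner above is how minikube upserts a hosts entry: strip any existing host.minikube.internal line, append the fresh mapping, stage the result in a PID-keyed temp file, and copy it back over /etc/hosts with sudo. Generalized (variable names illustrative):
	
	    IP=192.168.65.254
	    NAME=host.minikube.internal
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	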
	I1210 07:28:10.267150    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:10.326284    6044 kubeadm.go:884] updating cluster {Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:28:10.326284    6044 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:28:10.330772    6044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:28:10.360547    6044 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 07:28:10.360547    6044 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:28:10.360547    6044 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 docker true true} ...
	I1210 07:28:10.360547    6044 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-099700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:28:10.363543    6044 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:28:10.443578    6044 cni.go:84] Creating CNI manager for ""
	I1210 07:28:10.443578    6044 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:28:10.443578    6044 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:28:10.443578    6044 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-099700 NodeName:no-preload-099700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:28:10.444576    6044 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-099700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:28:10.449257    6044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:28:10.466057    6044 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:28:10.477115    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:28:10.492449    6044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1210 07:28:10.512450    6044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:28:10.531463    6044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
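	
	The 2226-byte payload just copied is the kubeadm config rendered above, staged as /var/tmp/minikube/kubeadm.yaml.new; on a restart path like this one it is only acted on if it differs from the existing kubeadm.yaml (see the diff at 07:28:12 below). On a fresh start the same file would drive kubeadm directly; a hedged sketch, with flags abridged:
	
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=all
	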
	I1210 07:28:10.556452    6044 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:28:10.563447    6044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:28:10.584406    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:28:10.757207    6044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:28:10.779235    6044 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700 for IP: 192.168.103.2
	I1210 07:28:10.779235    6044 certs.go:195] generating shared ca certs ...
	I1210 07:28:10.779235    6044 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:28:10.779235    6044 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:28:10.780218    6044 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:28:10.780218    6044 certs.go:257] generating profile certs ...
	I1210 07:28:10.780218    6044 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\client.key
	I1210 07:28:10.781224    6044 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key.605fe1d0
	I1210 07:28:10.781224    6044 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.key
	I1210 07:28:10.781224    6044 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:28:10.782212    6044 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:28:10.782212    6044 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:28:10.782212    6044 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:28:10.782212    6044 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:28:10.782212    6044 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:28:10.783213    6044 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:28:10.784223    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:28:10.813205    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:28:10.840211    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:28:10.869223    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:28:10.901463    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:28:10.932460    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:28:10.961003    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:28:10.986878    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-099700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:28:11.020148    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:28:11.050680    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:28:11.076128    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:28:11.109322    6044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:28:11.140585    6044 ssh_runner.go:195] Run: openssl version
	I1210 07:28:11.154280    6044 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:28:11.173940    6044 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:28:11.195258    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:28:11.208079    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:28:11.212461    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:28:11.259351    6044 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:28:11.276346    6044 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:28:11.295698    6044 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:28:11.317049    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:28:11.325136    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:28:11.330045    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:28:11.381152    6044 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:28:11.402991    6044 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:28:11.427674    6044 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:28:11.446552    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:28:11.454620    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:28:11.458572    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:28:11.513055    6044 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:28:11.528055    6044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:28:11.540353    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:28:11.591429    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:28:11.644472    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:28:11.717592    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:28:11.768587    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:28:11.824072    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
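	
	The -checkend probes above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the cert for regeneration. Standalone:
	
	    # Exit 0 if still valid 24h from now, 1 if it will have expired by then
	    sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "ok for 24h" || echo "expiring; would regenerate"
	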
	I1210 07:28:11.874449    6044 kubeadm.go:401] StartCluster: {Name:no-preload-099700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-099700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:28:11.877943    6044 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:28:11.923351    6044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:28:11.940233    6044 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:28:11.940233    6044 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:28:11.943233    6044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:28:11.956240    6044 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:28:11.960247    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:12.013356    6044 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-099700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:28:12.014169    6044 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-099700" cluster setting kubeconfig missing "no-preload-099700" context setting]
	I1210 07:28:12.014952    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:28:12.039222    6044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:28:12.053225    6044 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 07:28:12.053225    6044 kubeadm.go:602] duration metric: took 112.9902ms to restartPrimaryControlPlane
	I1210 07:28:12.053225    6044 kubeadm.go:403] duration metric: took 178.8148ms to StartCluster
	I1210 07:28:12.053225    6044 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:28:12.053225    6044 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:28:12.054223    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:28:12.055221    6044 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:28:12.055221    6044 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:28:12.055221    6044 addons.go:70] Setting storage-provisioner=true in profile "no-preload-099700"
	I1210 07:28:12.055221    6044 addons.go:70] Setting dashboard=true in profile "no-preload-099700"
	I1210 07:28:12.055221    6044 addons.go:70] Setting default-storageclass=true in profile "no-preload-099700"
	I1210 07:28:12.055221    6044 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-099700"
	I1210 07:28:12.055221    6044 addons.go:239] Setting addon storage-provisioner=true in "no-preload-099700"
	I1210 07:28:12.055221    6044 addons.go:239] Setting addon dashboard=true in "no-preload-099700"
	I1210 07:28:12.055221    6044 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	W1210 07:28:12.055221    6044 addons.go:248] addon dashboard should already be in state true
	I1210 07:28:12.055221    6044 host.go:66] Checking if "no-preload-099700" exists ...
	I1210 07:28:12.055221    6044 host.go:66] Checking if "no-preload-099700" exists ...
	I1210 07:28:12.061227    6044 out.go:179] * Verifying Kubernetes components...
	I1210 07:28:12.065223    6044 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:28:12.065223    6044 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:28:12.067231    6044 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:28:12.068228    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:28:12.121226    6044 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:28:12.122230    6044 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:28:12.123230    6044 addons.go:239] Setting addon default-storageclass=true in "no-preload-099700"
	I1210 07:28:12.124224    6044 host.go:66] Checking if "no-preload-099700" exists ...
	I1210 07:28:12.126225    6044 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:28:12.126225    6044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:28:12.126225    6044 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:28:12.128226    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:28:12.128226    6044 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:28:12.130225    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:12.132235    6044 cli_runner.go:164] Run: docker container inspect no-preload-099700 --format={{.State.Status}}
	I1210 07:28:12.132235    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:12.187228    6044 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:28:12.187228    6044 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:28:12.187228    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:12.188232    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:12.190228    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:12.242226    6044 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57436 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-099700\id_rsa Username:docker}
	I1210 07:28:12.273239    6044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:28:12.358768    6044 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-099700
	I1210 07:28:12.369486    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:28:12.375799    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:28:12.375799    6044 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:28:12.404556    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:28:12.404556    6044 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:28:12.405542    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:28:12.426537    6044 node_ready.go:35] waiting up to 6m0s for node "no-preload-099700" to be "Ready" ...
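	The node_ready.go line above is the start of a readiness wait: poll the node object until its Ready condition flips to True or the 6m0s budget runs out. A minimal client-go sketch of that kind of wait follows; the kubeconfig path, node name, and poll interval are illustrative assumptions, not minikube's actual code.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // nodeReady reports whether the node's Ready condition is True.
	    func nodeReady(n *corev1.Node) bool {
	        for _, c := range n.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        // illustrative kubeconfig path (the log applies manifests with
	        // KUBECONFIG=/var/lib/minikube/kubeconfig on the node)
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting up to 6m0s"
	        for time.Now().Before(deadline) {
	            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-099700", metav1.GetOptions{})
	            if err == nil && nodeReady(n) {
	                fmt.Println("node is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for node to be Ready")
	    }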
	I1210 07:28:12.481560    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:28:12.481560    6044 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:28:12.569985    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:28:12.569985    6044 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1210 07:28:12.587151    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:12.587151    6044 retry.go:31] will retry after 356.203858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
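	Every failure in this stretch of the log has the same root cause: kubectl validates each manifest against the apiserver's OpenAPI schema before applying it, and nothing is listening on localhost:8443 yet (kubelet is only started at 07:28:12.273), so the dial is refused and the apply never reaches the cluster. The retry.go lines show minikube retrying after a short, growing, jittered delay. A rough sketch of that retry shape, under the assumption of a plain exec of kubectl rather than minikube's actual ssh_runner plumbing:

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        base := 300 * time.Millisecond
	        for attempt := 1; attempt <= 10; attempt++ {
	            // assumes kubectl is on PATH; flags mirror the logged command
	            out, err := exec.Command(
	                "kubectl", "apply", "--force",
	                "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	            ).CombinedOutput()
	            if err == nil {
	                fmt.Println("applied:", string(out))
	                return
	            }
	            // jittered, growing delay, which is why the logged
	            // "will retry after" values vary (356ms, 373ms, 834ms, ...)
	            delay := base + time.Duration(rand.Int63n(int64(base)))
	            fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, delay, err)
	            time.Sleep(delay)
	            base = base * 3 / 2
	        }
	    }

	Once the apiserver starts accepting connections, the same apply commands succeed unchanged, which is why retrying is preferred here over editing the manifests or the flags.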
	I1210 07:28:12.597730    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:28:12.597730    6044 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1210 07:28:12.659793    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:12.659793    6044 retry.go:31] will retry after 372.529216ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:12.670815    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:28:12.670815    6044 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:28:12.692787    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:28:12.692787    6044 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1210 07:28:12.717099    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:28:12.718096    6044 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:28:12.747640    6044 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:28:12.747640    6044 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:28:12.770575    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:12.857712    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:12.857712    6044 retry.go:31] will retry after 262.358633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:12.948335    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:28:13.037161    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:13.044570    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.044570    6044 retry.go:31] will retry after 373.481195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:13.116594    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.116594    6044 retry.go:31] will retry after 449.903508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.123576    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:13.205575    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.205575    6044 retry.go:31] will retry after 502.703262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.423587    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:13.503716    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.503716    6044 retry.go:31] will retry after 834.165246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.570686    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:13.667619    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.667619    6044 retry.go:31] will retry after 556.676276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.713507    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:13.801618    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:13.801618    6044 retry.go:31] will retry after 390.939533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:14.197876    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:28:14.230686    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:14.297856    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:14.297856    6044 retry.go:31] will retry after 910.356645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:14.316855    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:14.317854    6044 retry.go:31] will retry after 811.816456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:14.341851    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:14.425242    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:14.425322    6044 retry.go:31] will retry after 1.044125131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:15.134387    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:28:15.212305    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:15.221304    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:15.221304    6044 retry.go:31] will retry after 1.007506766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:15.299309    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:15.299362    6044 retry.go:31] will retry after 1.218109025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:15.473419    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:15.560425    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:15.560425    6044 retry.go:31] will retry after 1.80281162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:16.233691    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:16.326189    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:16.326758    6044 retry.go:31] will retry after 1.740210597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:16.521778    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:16.617802    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:16.617802    6044 retry.go:31] will retry after 1.60547536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:17.368100    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:17.463704    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:17.463785    6044 retry.go:31] will retry after 1.900920253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:18.072214    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:18.160938    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:18.161028    6044 retry.go:31] will retry after 2.784207214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:18.227866    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:18.335160    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:18.335160    6044 retry.go:31] will retry after 2.631412724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:19.370281    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:19.466075    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:19.466075    6044 retry.go:31] will retry after 2.980311897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:20.948950    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:28:20.972079    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:21.056632    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:21.056632    6044 retry.go:31] will retry after 4.262280061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:21.068996    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:21.068996    6044 retry.go:31] will retry after 3.639074627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:22.451377    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:22.461878    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:28:22.535196    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:22.535196    6044 retry.go:31] will retry after 3.845994478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:24.713230    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:24.829424    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:24.829424    6044 retry.go:31] will retry after 7.942970039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:25.325706    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:25.412261    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:25.412261    6044 retry.go:31] will retry after 6.384318043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:26.413529    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:26.745629    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:26.745629    6044 retry.go:31] will retry after 3.292706649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:30.048577    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:30.162580    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:30.162580    6044 retry.go:31] will retry after 9.076667748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:31.804669    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:31.939668    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:31.939668    6044 retry.go:31] will retry after 9.577940126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:32.575784    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:28:32.778892    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:32.905890    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:32.905890    6044 retry.go:31] will retry after 13.11482653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:39.244781    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:39.332552    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:39.332552    6044 retry.go:31] will retry after 10.498880849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:41.522295    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:41.607883    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:41.607883    6044 retry.go:31] will retry after 16.368604263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:42.610426    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:28:46.025472    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:28:46.109449    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:46.109449    6044 retry.go:31] will retry after 14.366239815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:49.836325    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:28:49.920768    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:49.920768    6044 retry.go:31] will retry after 31.453636071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:28:52.644776    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:28:57.982170    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:28:58.077050    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:28:58.077050    6044 retry.go:31] will retry after 12.433960124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:00.482027    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:00.594031    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:00.594031    6044 retry.go:31] will retry after 21.371573294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:02.683556    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:29:10.517815    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:29:10.616137    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:10.616137    6044 retry.go:31] will retry after 17.854225122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:12.716974    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:29:21.380009    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:21.479004    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:21.479004    6044 retry.go:31] will retry after 33.119844872s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:21.970394    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:22.051381    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:22.051381    6044 retry.go:31] will retry after 38.503650295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:22.753292    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:29:28.475500    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:29:28.560492    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:28.560492    6044 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1210 07:29:32.962733    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:29:42.997203    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:29:53.034041    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:29:54.603046    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:54.691431    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:54.691431    6044 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:30:00.562906    6044 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:30:00.677220    6044 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:30:00.677220    6044 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:30:00.682219    6044 out.go:179] * Enabled addons: 
	I1210 07:30:00.685222    6044 addons.go:530] duration metric: took 1m48.6273132s for enable addons: enabled=[]
	W1210 07:30:03.067501    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:30:13.104575    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:30:23.138490    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:30:33.172938    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:30:43.206101    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:30:53.242204    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:03.274367    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:13.308733    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 

                                                
                                                
** /stderr **
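Every kubectl apply in the stderr above fails the same way: client-side validation needs the apiserver's OpenAPI document, and https://localhost:8443 refuses the connection, so the apiserver inside the container never came back up after the stop/start. minikube's addon loop simply retries each manifest with a growing, jittered delay (the 10.4s, 16.3s, 31.4s... intervals logged by retry.go) before giving up with "Enabled addons: []". A minimal sketch of that retry shape, using the real k8s.io/apimachinery/pkg/util/wait package; applyManifest, the backoff tuning, and the manifest path are illustrative stand-ins, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// applyManifest is a hypothetical stand-in for minikube's addon apply:
// it shells out to kubectl and reports any non-zero exit as an error.
func applyManifest(path string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
	}
	return nil
}

func main() {
	// Backoff roughly matching the delays seen in the log above
	// (assumed values; the real retry.go tuning may differ).
	backoff := wait.Backoff{
		Duration: 10 * time.Second,
		Factor:   1.5,
		Jitter:   0.5,
		Steps:    5,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := applyManifest("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			fmt.Println("apply failed, will retry:", err)
			return false, nil // retriable failure: back off and try again
		}
		return true, nil // applied successfully
	})
	if err != nil {
		fmt.Println("giving up:", err) // wait.ErrWaitTimeout after Steps attempts
	}
}

As in the log, no amount of retrying helps here, because the failure is upstream of the manifests: the endpoint being retried against is down.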
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-099700 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-rc.1": exit status 80
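Exit status 80 is minikube's GUEST_START failure class: the run used --wait=true, so after the addons step the start command kept polling the node's Ready condition (node_ready.go, roughly every 10s judging by the timestamps) until the 6m0s wait expired with "WaitNodeCondition: context deadline exceeded". A minimal client-go sketch of that kind of readiness poll, assuming a reachable kubeconfig; the node name and kubeconfig path are taken from the log, and the 10s interval is inferred, not confirmed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. the EOF / connection refused seen in the log
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the 6-minute deadline from the failure above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		if ok, err := nodeReady(ctx, cs, "no-preload-099700"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		} else if err != nil {
			fmt.Println("will retry:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("WaitNodeCondition:", ctx.Err()) // context deadline exceeded
			return
		case <-time.After(10 * time.Second):
		}
	}
}

The post-mortem that follows gathers the environment and container state for the failed node.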
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:27:59.880122532Z",
	            "FinishedAt": "2025-12-10T07:27:56.24098096Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36b12f7c82c546811ea16d124f8782cdd27350c19ac1d3ab3f547c6a6d9a2eab",
	            "SandboxKey": "/var/run/docker/netns/36b12f7c82c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57440"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "5663a1495caac3a8be49ce34bbbb4f5a9e88b108cb75e92d2208550cc897ee2e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
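Most of the inspect JSON above is noise for triage; the fields that matter for this failure are the container state, restart count, and IP. A small sketch (container name taken from this report) that pulls just those via docker inspect's --format Go template instead of dumping everything:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --format renders a Go template against the same JSON dumped above.
		out, err := exec.Command("docker", "inspect", "--format",
			"status={{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
			"no-preload-099700").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // expected from the dump: status=running restarts=0 ip=192.168.103.2
	}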
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 2 (603.0637ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
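On the "(may be ok)" note: minikube status encodes component state in its exit code, so a non-zero exit can still come with a usable stdout value from --format={{.Host}} ("Running" above, exit status 2). A hedged sketch of reading both, using the profile name from this report:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"status", "--format={{.Host}}", "-p", "no-preload-099700")
		out, err := cmd.Output() // stdout is captured even when the exit code is non-zero
		host := string(out)      // "Running" in the report despite exit status 2
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("host=%q (exit %d, may be ok)\n", host, exitErr.ExitCode())
			return
		}
		fmt.Printf("host=%q\n", host)
	}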
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (1.3061913s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/kube-flannel/cni-conf.json                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status kubelet --all --full --no-pager         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat kubelet --no-pager                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat docker --no-pager                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo docker system info                                       │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cri-dockerd --version                                    │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo containerd config dump                                   │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat crio --no-pager                            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo crio config                                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ delete  │ -p custom-flannel-648600                                                               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:31:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.483636    2240 out.go:374] Setting ErrFile to fd 1148...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.498633    2240 out.go:368] Setting JSON to false
	I1210 07:31:27.500624    2240 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10819,"bootTime":1765341068,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:31:27.500624    2240 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:31:27.505874    2240 out.go:179] * [custom-flannel-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:31:27.510785    2240 notify.go:221] Checking for updates...
	I1210 07:31:27.513604    2240 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:31:27.516776    2240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:31:27.521423    2240 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:31:27.524646    2240 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:31:27.526628    2240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:27.530138    2240 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:27.530637    2240 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.530927    2240 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.531072    2240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:31:27.674116    2240 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:31:27.679999    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:27.935225    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:27.906881904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:27.940210    2240 out.go:179] * Using the docker driver based on user configuration
	I1210 07:31:27.947210    2240 start.go:309] selected driver: docker
	I1210 07:31:27.947210    2240 start.go:927] validating driver "docker" against <nil>
	I1210 07:31:27.947210    2240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:31:28.038927    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:28.306393    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:28.276193336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:28.307456    2240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:31:28.308474    2240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:31:28.311999    2240 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:31:28.314563    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:31:28.314921    2240 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 07:31:28.314921    2240 start.go:353] cluster config:
	{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:31:28.317704    2240 out.go:179] * Starting "custom-flannel-648600" primary control-plane node in "custom-flannel-648600" cluster
	I1210 07:31:28.318967    2240 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:31:28.320981    2240 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:28.323967    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:28.323967    2240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:31:28.370604    2240 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.410253    2240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:31:28.410253    2240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:31:28.586590    2240 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.586590    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:28.586590    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json: {Name:mk37135597d0b3e0094e1cb1b5ff50d942db06b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:28.587928    2240 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:31:28.587928    2240 start.go:360] acquireMachinesLock for custom-flannel-648600: {Name:mk4a3a34c58cff29c46217d57a91ed79fc9f522b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:28.588459    2240 start.go:364] duration metric: took 531.3µs to acquireMachinesLock for "custom-flannel-648600"
	I1210 07:31:28.588615    2240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:31:28.588742    2240 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:31:28.592548    2240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:31:28.593172    2240 start.go:159] libmachine.API.Create for "custom-flannel-648600" (driver="docker")
	I1210 07:31:28.593172    2240 client.go:173] LocalClient.Create starting
	I1210 07:31:28.593172    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.601656    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:31:28.702719    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:31:28.710721    2240 network_create.go:284] running [docker network inspect custom-flannel-648600] to gather additional debugging logs...
	I1210 07:31:28.710721    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600
	W1210 07:31:28.938963    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 returned with exit code 1
	I1210 07:31:28.938963    2240 network_create.go:287] error running [docker network inspect custom-flannel-648600]: docker network inspect custom-flannel-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-648600 not found
	I1210 07:31:28.938963    2240 network_create.go:289] output of [docker network inspect custom-flannel-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-648600 not found
	
	** /stderr **
	I1210 07:31:28.945949    2240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:31:29.091971    2240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.381586    2240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.465291    2240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a8ae0}
	I1210 07:31:29.465291    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:31:29.470056    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.046347    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.046347    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.046347    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:31:30.140283    2240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.262644    2240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e1d40}
	I1210 07:31:30.262866    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:31:30.267646    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.581811    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.581811    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.581811    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.76.0/24, will retry: subnet is taken
	I1210 07:31:30.621040    2240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.648052    2240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cde450}
	I1210 07:31:30.648052    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:31:30.656045    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	I1210 07:31:30.870907    2240 network_create.go:108] docker network custom-flannel-648600 192.168.85.0/24 created
	I1210 07:31:30.870907    2240 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-648600" container
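Note: the three create attempts above show minikube's subnet probing: when "docker network create" fails with "Pool overlaps with other one on this address space", the next free private /24 is tried (192.168.67.0 -> 192.168.76.0 -> 192.168.85.0). A minimal Go sketch of that retry loop, assuming docker is on PATH and using a hard-coded candidate list where the real code walks the private ranges programmatically:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork shells out to docker and surfaces the daemon's
    // error text so the caller can classify it.
    func createNetwork(name, subnet, gateway string) error {
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
            name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Candidate /24s, mirroring the 67.0 -> 76.0 -> 85.0 walk in the log.
        candidates := [][2]string{
            {"192.168.67.0/24", "192.168.67.1"},
            {"192.168.76.0/24", "192.168.76.1"},
            {"192.168.85.0/24", "192.168.85.1"},
        }
        for _, c := range candidates {
            err := createNetwork("custom-flannel-648600", c[0], c[1])
            if err == nil {
                fmt.Println("created network on", c[0])
                return
            }
            // An overlap means the subnet is taken: retry on the next one.
            if strings.Contains(err.Error(), "Pool overlaps") {
                continue
            }
            fmt.Println("unrecoverable:", err)
            return
        }
        fmt.Println("no free subnet found")
    }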
	I1210 07:31:30.881906    2240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:31:31.006456    2240 cli_runner.go:164] Run: docker volume create custom-flannel-648600 --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:31:31.098467    2240 oci.go:103] Successfully created a docker volume custom-flannel-648600
	I1210 07:31:31.104469    2240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2058554s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:31:31.792496    2240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.2053301s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:31:31.794500    2240 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.794500    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:31:31.794500    2240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2078599s
	I1210 07:31:31.795487    2240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:31:31.796493    2240 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.796493    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:31:31.796493    2240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.2098526s
	I1210 07:31:31.796493    2240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:31:31.809204    2240 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.809204    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:31:31.809204    2240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2225634s
	I1210 07:31:31.809728    2240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:31:31.821783    2240 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.822582    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:31:31.822582    2240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.2354164s
	I1210 07:31:31.822582    2240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:31:31.828690    2240 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.828690    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:31:31.828690    2240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.2420491s
	I1210 07:31:31.828690    2240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:31:31.868175    2240 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.869189    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:31:31.869189    2240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.2820228s
	I1210 07:31:31.869189    2240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:31:31.869189    2240 cache.go:87] Successfully saved all images to host disk.
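Note: each acquiring-lock/exists pair in the cache lines above is a per-image critical section: the tarball under .minikube\cache\images is written at most once, and concurrent callers serialise on a named lock. A rough sketch of that pattern, with a hypothetical saveToTar standing in for the real image export:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
    )

    var locks sync.Map // image name -> *sync.Mutex

    // cacheImage writes a placeholder tar file for img under dir exactly
    // once, serialising concurrent callers on a per-image mutex.
    func cacheImage(dir, img string) error {
        m, _ := locks.LoadOrStore(img, &sync.Mutex{})
        mu := m.(*sync.Mutex)
        mu.Lock()
        defer mu.Unlock()

        dest := filepath.Join(dir, strings.ReplaceAll(img, "/", "_"))
        if _, err := os.Stat(dest); err == nil {
            fmt.Println("cache hit:", dest) // mirrors "cache.go:115] ... exists"
            return nil
        }
        return saveToTar(img, dest) // hypothetical export step
    }

    func saveToTar(img, dest string) error {
        return os.WriteFile(dest, []byte(img), 0o644)
    }

    func main() {
        dir, _ := os.MkdirTemp("", "cache")
        var wg sync.WaitGroup
        for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.5-0"} {
            wg.Add(1)
            go func(i string) { defer wg.Done(); _ = cacheImage(dir, i) }(img)
        }
        wg.Wait()
    }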
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
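Note: the container-status probe above degrades gracefully: it prefers crictl and falls back to plain "docker ps -a" when crictl is missing. The same first-success fallback, sketched in Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runFirst returns the output of the first command that exits zero,
    // mirroring the `crictl ps -a || docker ps -a` fallback in the log.
    func runFirst(cmds [][]string) (string, error) {
        var lastErr error
        for _, c := range cmds {
            out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
            if err == nil {
                return string(out), nil
            }
            lastErr = err
        }
        return "", lastErr
    }

    func main() {
        out, err := runFirst([][]string{
            {"crictl", "ps", "-a"},
            {"docker", "ps", "-a"},
        })
        if err != nil {
            fmt.Println("no runtime answered:", err)
            return
        }
        fmt.Print(out)
    }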
	I1210 07:31:32.772569    2240 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6680738s)
	I1210 07:31:32.772569    2240 oci.go:107] Successfully prepared a docker volume custom-flannel-648600
	I1210 07:31:32.772569    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:32.777565    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:33.023291    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:33.001747684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:33.027286    2240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:31:33.264619    2240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-648600 --name custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-648600 --network custom-flannel-648600 --ip 192.168.85.2 --volume custom-flannel-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:31:34.003194    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Running}}
	I1210 07:31:34.069196    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.137196    2240 cli_runner.go:164] Run: docker exec custom-flannel-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:31:34.255530    2240 oci.go:144] the created container "custom-flannel-648600" has a running status.
	I1210 07:31:34.255530    2240 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:34.371827    2240 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:31:34.454671    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.514682    2240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:31:34.514682    2240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:31:34.665673    2240 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
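Note: the kic SSH setup above generates an id_rsa/id_rsa.pub pair on the host, copies the public half into the container's authorized_keys (381 bytes), and tightens permissions on the private key. A self-contained sketch of the key-generation step, assuming the golang.org/x/crypto/ssh module is available:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // 2048-bit key pair, a stand-in for the id_rsa/id_rsa.pub files the
        // log writes under .minikube\machines\<profile>\.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private half, PEM-encoded, readable only by the current user.
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
            panic(err)
        }
        // Public half in authorized_keys format, ready to be copied into
        // the container's /home/docker/.ssh/authorized_keys.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote id_rsa and id_rsa.pub")
    }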
	I1210 07:31:37.044619    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:37.095607    2240 machine.go:94] provisionDockerMachine start ...
	I1210 07:31:37.098607    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.155601    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.171620    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.171620    2240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:31:37.347331    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.347331    2240 ubuntu.go:182] provisioning hostname "custom-flannel-648600"
	I1210 07:31:37.350327    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.408671    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.409222    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.409222    2240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:37.617301    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.621329    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.680493    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.681514    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.681514    2240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:31:37.850452    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
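Note: the guarded script above leaves /etc/hosts alone if the hostname is already present, rewrites an existing 127.0.1.1 entry if there is one, and appends otherwise. The same idempotent edit in plain Go, operating on a local copy named hosts.txt for illustration:

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // pinHostname mirrors the guarded /etc/hosts edit in the log: no-op if
    // any line already names host, else rewrite an existing 127.0.1.1 entry
    // or append a new one.
    func pinHostname(path, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(host) + `$`).Match(data) {
            return nil // hostname already present
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if re.Match(data) {
            out = re.ReplaceAllString(string(data), "127.0.1.1 "+host)
        } else {
            out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + host + "\n"
        }
        return os.WriteFile(path, []byte(out), 0o644)
    }

    func main() {
        if err := pinHostname("hosts.txt", "custom-flannel-648600"); err != nil {
            fmt.Println(err)
        }
    }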
	I1210 07:31:37.850452    2240 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:31:37.850452    2240 ubuntu.go:190] setting up certificates
	I1210 07:31:37.850452    2240 provision.go:84] configureAuth start
	I1210 07:31:37.855263    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:37.926854    2240 provision.go:143] copyHostCerts
	I1210 07:31:37.927569    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:31:37.927608    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:31:37.928059    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:31:37.928961    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:31:37.928961    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:31:37.928961    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:31:37.930358    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:31:37.930390    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:31:37.930744    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:31:37.931754    2240 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-648600 san=[127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube]
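Note: the server cert generated here carries the SAN set [127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube] and is signed by the profile's CA. A compact crypto/x509 sketch of the same shape, using a freshly generated self-signed CA in place of the ca.pem/ca-key.pem pair from the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA; the real flow loads ca.pem/ca-key.pem from disk.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server cert with the SAN set reported in the log line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-648600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"custom-flannel-648600", "localhost", "minikube"},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        must(0, os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644))
        must(0, os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey),
        }), 0o600))
        fmt.Println("wrote server.pem and server-key.pem")
    }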
	I1210 07:31:38.038131    2240 provision.go:177] copyRemoteCerts
	I1210 07:31:38.042277    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:31:38.045314    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.098793    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:38.243502    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:31:38.284050    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1210 07:31:38.320436    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:31:38.351829    2240 provision.go:87] duration metric: took 501.3694ms to configureAuth
	I1210 07:31:38.351829    2240 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:31:38.352840    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:38.355824    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.405824    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.405824    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.405824    2240 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:31:38.582107    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:31:38.582107    2240 ubuntu.go:71] root file system type: overlay
	I1210 07:31:38.582107    2240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:31:38.585874    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.646407    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.646407    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.646407    2240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:31:38.847766    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:31:38.852241    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.938899    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.938899    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.938899    2240 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:31:40.711527    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:31:38.832035101 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1210 07:31:40.711665    2240 machine.go:97] duration metric: took 3.616002s to provisionDockerMachine
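Note: the final provisioning command above is a compare-then-swap: diff the rendered docker.service.new against the installed unit, and only on a difference move it into place and daemon-reload/restart. A local sketch of that idempotent install, assuming plain files rather than a remote host:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes rendered to path only when the content
    // differs, then runs reload (e.g. daemon-reload && restart) after the
    // swap; an unchanged file leaves the running service undisturbed.
    func installIfChanged(path string, rendered []byte, reload func() error) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // same content: nothing to do
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        return reload()
    }

    func main() {
        unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd\n")
        err := installIfChanged("docker.service", unit, func() error {
            // Stand-in for `systemctl daemon-reload && systemctl restart docker`.
            return exec.Command("true").Run()
        })
        fmt.Println("install result:", err)
    }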
	I1210 07:31:40.711665    2240 client.go:176] duration metric: took 12.1183047s to LocalClient.Create
	I1210 07:31:40.711665    2240 start.go:167] duration metric: took 12.1183047s to libmachine.API.Create "custom-flannel-648600"
	I1210 07:31:40.711665    2240 start.go:293] postStartSetup for "custom-flannel-648600" (driver="docker")
	I1210 07:31:40.711665    2240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:31:40.715645    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:31:40.718723    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:40.776513    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:40.917451    2240 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:31:40.923444    2240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:31:40.923444    2240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:31:40.923444    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:31:40.929458    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:31:40.942452    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:31:40.977491    2240 start.go:296] duration metric: took 265.8211ms for postStartSetup
	I1210 07:31:40.981481    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.034489    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:41.039496    2240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:31:41.043532    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.111672    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.255080    2240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:31:41.269938    2240 start.go:128] duration metric: took 12.6809984s to createHost
	I1210 07:31:41.269938    2240 start.go:83] releasing machines lock for "custom-flannel-648600", held for 12.6812262s
	I1210 07:31:41.273664    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.324666    2240 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:31:41.329678    2240 ssh_runner.go:195] Run: cat /version.json
	I1210 07:31:41.329678    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.334670    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	W1210 07:31:41.497715    2240 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:31:41.501431    2240 ssh_runner.go:195] Run: systemctl --version
	I1210 07:31:41.518880    2240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:31:41.528176    2240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:31:41.531184    2240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:31:41.579185    2240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:31:41.579185    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:41.579185    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:41.579185    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:31:41.596178    2240 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:31:41.596178    2240 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:31:41.606178    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:31:41.626187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:31:41.641198    2240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:31:41.645182    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:31:41.668187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.687179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:31:41.706179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.724180    2240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:31:41.742180    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:31:41.759185    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:31:41.778184    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:31:41.795180    2240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:31:41.811185    2240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:31:41.828187    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:41.983806    2240 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1210 07:31:42.163822    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:42.163822    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:42.167818    2240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:31:42.193819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.216825    2240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:31:42.280833    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.301820    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:31:42.320823    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:31:42.345832    2240 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:31:42.358831    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:31:42.373835    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:31:42.401822    2240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:42.551686    2240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:31:42.712827    2240 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:31:42.712827    2240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:31:42.735824    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:31:42.756828    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:42.906845    2240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:31:43.937123    2240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0302614s)
	I1210 07:31:43.944887    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:31:43.971819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:31:43.996364    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.030377    2240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:31:44.173489    2240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:31:44.332105    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.483148    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:31:44.509404    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:31:44.533765    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.690011    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:31:44.790147    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.810716    2240 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:31:44.813714    2240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:31:44.820719    2240 start.go:564] Will wait 60s for crictl version
	I1210 07:31:44.824717    2240 ssh_runner.go:195] Run: which crictl
	I1210 07:31:44.835701    2240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:31:44.880457    2240 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:31:44.883920    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:44.928460    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:45.060104    2240 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:31:45.062900    2240 cli_runner.go:164] Run: docker exec -t custom-flannel-648600 dig +short host.docker.internal
	I1210 07:31:45.193754    2240 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:31:45.197851    2240 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:31:45.204880    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:31:45.225085    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:45.282870    2240 kubeadm.go:884] updating cluster {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:31:45.283875    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:45.286873    2240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:31:45.317078    2240 docker.go:691] Got preloaded images: 
	I1210 07:31:45.317078    2240 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:31:45.317078    2240 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:31:45.330428    2240 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.336331    2240 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.341435    2240 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.341435    2240 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.347452    2240 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.347452    2240 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.352434    2240 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.355426    2240 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.358455    2240 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.361429    2240 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.365434    2240 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.366439    2240 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.369440    2240 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:45.370428    2240 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.374431    2240 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.379430    2240 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:31:45.411422    2240 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.466193    2240 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.518621    2240 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.573883    2240 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.622874    2240 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.672905    2240 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.723034    2240 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.771034    2240 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:31:45.842424    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.842823    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.869734    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890739    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890951    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.897121    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.901151    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.922366    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:31:45.956325    2240 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:31:45.956325    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:45.956325    2240 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.961320    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.992754    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:31:46.059786    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:31:46.060783    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.065694    2240 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:31:46.065694    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.065694    2240 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:31:46.067530    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.067911    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.068609    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:46.070610    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:31:46.073597    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.074603    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.147805    2240 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:31:46.151807    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.261151    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:46.262119    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:46.272115    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.272115    2240 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:31:46.272115    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.272115    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:31:46.272115    2240 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.272115    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:31:46.277116    2240 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.278121    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.289109    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.293116    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:31:46.476808    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:31:46.481795    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:46.504793    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:31:46.504793    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:31:46.672791    2240 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.672791    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:31:47.172597    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:31:47.208589    2240 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:47.208589    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:48.287161    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0785558s)
	I1210 07:31:48.287161    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:31:48.287161    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:48.287161    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:31:51.130300    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.8430943s)
	I1210 07:31:51.130300    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:31:51.130300    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:51.130300    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:31:52.383759    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.2534401s)
	I1210 07:31:52.383759    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:31:52.383759    2240 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:52.383759    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.448511    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:52.475325    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:52.506360    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.506360    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:52.510172    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:52.540147    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.540147    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:52.544437    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:52.575774    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.575774    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:52.579336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:52.610061    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.610061    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:52.613342    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:52.642765    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.642765    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:52.649215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:52.678701    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.678701    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:52.682526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:52.710203    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.710203    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:52.715870    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:52.745326    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.745351    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:52.745351    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:52.745397    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.811401    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:52.811401    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:52.853138    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:52.853138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:52.968335    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:52.968335    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:52.968335    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:52.995279    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:52.995802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:55.245680    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8618761s)
	I1210 07:31:55.245680    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:31:55.246466    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:55.246522    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:31:56.790187    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.5436405s)
	I1210 07:31:56.790187    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:31:56.790187    2240 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:56.790187    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:55.548093    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:55.571449    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:55.603901    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.603970    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:55.607695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:55.639065    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.639065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:55.643536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:55.671930    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.671930    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:55.675998    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:55.704460    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.704460    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:55.708947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:55.739257    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.739257    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:55.742852    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:55.772295    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.772344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:55.776423    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:55.803812    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.803812    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:55.809849    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:55.841586    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.841647    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:55.841647    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:55.841647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:55.916368    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:55.916368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:55.958653    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:55.958653    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:56.055702    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:56.055702    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:56.055702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:56.084883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:56.084883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.290113    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (4.4998566s)
	I1210 07:32:01.290113    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:32:01.290113    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:32:01.290113    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	I1210 07:31:58.642350    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:58.668189    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:58.699633    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.699633    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:58.705036    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:58.738553    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.738553    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:58.742579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:58.772414    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.772414    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:58.775757    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:58.804872    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.804872    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:58.808509    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:58.835398    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.835398    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:58.843124    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:58.871465    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.871465    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:58.875535    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:58.905029    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.905108    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:58.910324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:58.953100    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.953100    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:58.953100    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:58.953100    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:59.012946    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:59.012946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:59.052964    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:59.052964    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:59.146228    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:59.146228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:59.146228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:59.173200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:59.173200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.725170    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:01.746739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:01.779670    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.779670    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:01.783967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:01.812617    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.812617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:01.817482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:01.848083    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.848083    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:01.852344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:01.883648    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.883648    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:01.887655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:01.918403    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.918403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:01.922409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:01.961721    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.961721    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:01.969744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:01.998302    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.998302    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:02.003804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:02.032315    1436 logs.go:282] 0 containers: []
	W1210 07:32:02.032315    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:02.032315    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:02.032315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:02.096900    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:02.096900    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:02.136137    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:02.136137    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:02.227732    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:02.227732    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:02.227732    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:02.255236    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:02.255236    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:03.670542    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3803916s)
	I1210 07:32:03.670542    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:32:03.670542    2240 cache_images.go:125] Successfully loaded all cached images
	I1210 07:32:03.670542    2240 cache_images.go:94] duration metric: took 18.3531776s to LoadCachedImages
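
The LoadCachedImages phase that finishes here streams each cached tarball into the node's Docker daemon with `sudo cat <file> | docker load` rather than pulling from a registry. A minimal local sketch of that pipeline (paths copied from the log; minikube actually runs this over SSH via its ssh_runner):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // loadImage replays the log's `sudo cat <tarball> | docker load` pattern,
    // streaming a saved image archive into the local daemon.
    func loadImage(tarball string) error {
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", tarball))
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadImage("/var/lib/minikube/images/kube-proxy_v1.34.3"); err != nil {
    		log.Fatal(err)
    	}
    }
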
	I1210 07:32:03.670542    2240 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1210 07:32:03.670542    2240 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
	I1210 07:32:03.674057    2240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:32:03.753844    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:03.753844    2240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:03.753844    2240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-648600 NodeName:custom-flannel-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:03.753844    2240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
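The generated kubeadm config above is four YAML documents in one file, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A hedged sketch of walking such a multi-document stream with gopkg.in/yaml.v3 (not how kubeadm itself parses it):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// Decode one document per iteration until the stream is exhausted.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Println(doc.Kind, doc.APIVersion)
    	}
    }
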
	I1210 07:32:03.758233    2240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.772950    2240 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:32:03.777455    2240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 07:32:03.796039    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:03.796814    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:32:03.796843    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:32:03.817843    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:32:03.818011    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:32:03.818298    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:32:03.818803    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:32:03.822978    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:32:03.833074    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:32:03.833638    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
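
The `?checksum=file:...sha256` URLs above tell the downloader to verify each binary against its published SHA-256 digest instead of trusting the bytes blindly. A sketch of the same verification, under the assumption that the .sha256 file's first token is the bare hex digest (download URL copied from the log):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"strings"
    )

    // fetch returns the body of url, failing on non-200 responses.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl"
    	bin, err := fetch(base)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	want := strings.Fields(string(sum))[0] // assumed: digest is the first token
    	h := sha256.Sum256(bin)
    	if got := hex.EncodeToString(h[:]); got != want {
    		log.Fatalf("checksum mismatch: got %s want %s", got, want)
    	}
    	fmt.Println("kubectl checksum OK")
    }
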
	I1210 07:32:05.838364    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:05.850364    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1210 07:32:05.870151    2240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:05.891336    2240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:32:05.915010    2240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:05.922767    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:32:05.942185    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:06.099167    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
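
The /etc/hosts edit at 07:32:05.922767 is an idempotent upsert: strip any line ending in a tab plus control-plane.minikube.internal, append the fresh mapping, write to a temp file, and copy it over the original so a re-run never duplicates the entry. The same idea in Go, operating on a local copy (the real command runs remotely under sudo):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // upsertHost mirrors the log's one-liner: drop any existing line for host,
    // append the fresh mapping, and replace the file via a temp copy.
    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry, equivalent to grep -v $'\t<host>$'
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	// Local file named "hosts" for illustration; the real target is /etc/hosts.
    	if err := upsertHost("hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }
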
	I1210 07:32:06.121581    2240 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600 for IP: 192.168.85.2
	I1210 07:32:06.121613    2240 certs.go:195] generating shared ca certs ...
	I1210 07:32:06.121640    2240 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.121920    2240 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:32:06.122447    2240 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:32:06.122578    2240 certs.go:257] generating profile certs ...
	I1210 07:32:06.122578    2240 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key
	I1210 07:32:06.122578    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt with IP's: []
	I1210 07:32:06.321440    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt ...
	I1210 07:32:06.321440    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt: {Name:mk30a4977cc0d8ffd50678b3c23caa1e53531dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.322223    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key ...
	I1210 07:32:06.322223    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key: {Name:mke10982a653bbe15c8edebf2f43dc216f9268be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.323200    2240 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba
	I1210 07:32:06.323200    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:32:06.341062    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba ...
	I1210 07:32:06.341062    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba: {Name:mk0e9e825524eecc7aedfd18bb3bfe0b08c0466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342014    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba ...
	I1210 07:32:06.342014    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba: {Name:mk42b80e536f4c7e07cd83fa60afbb5af1e6e8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342947    2240 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt
	I1210 07:32:06.354920    2240 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key
	I1210 07:32:06.355812    2240 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key
	I1210 07:32:06.355812    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt with IP's: []
	I1210 07:32:06.438517    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt ...
	I1210 07:32:06.438517    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt: {Name:mk49d63357d91f886b5db1adca8a8959ac8a2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.439596    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key ...
	I1210 07:32:06.439596    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key: {Name:mkd00fe816a16ba7636ee1faff5584095510b505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
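
The client, apiserver, and aggregator certs generated above are minikubeCA-signed certificates with the listed IPs baked in as SANs. A compact self-signed stand-in using crypto/x509 (minikube actually signs with the shared CA key rather than self-signing; names and lifetimes here are illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		// IP SANs matching the apiserver profile cert in the log.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for brevity: the template doubles as its own parent.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	out, err := os.Create("apiserver.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer out.Close()
    	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }
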
	I1210 07:32:06.454147    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:32:06.454968    2240 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:06.454968    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:32:06.455228    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:32:06.455417    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:32:06.455581    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:32:06.455768    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:32:06.456703    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:06.490234    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:06.516382    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:06.546895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:06.579157    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:32:06.611194    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:32:06.642582    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:06.673947    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:32:06.702762    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:32:06.734932    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:32:06.763895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:06.794884    2240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:32:06.824804    2240 ssh_runner.go:195] Run: openssl version
	I1210 07:32:06.839620    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.863187    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:32:06.881235    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.889982    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.896266    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.945361    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:06.965592    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:32:06.982615    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.000345    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:32:07.019650    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.028440    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.032681    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.080664    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.098781    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.119820    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.138968    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:07.157588    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.166110    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.169123    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.218939    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:32:07.238245    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
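
The openssl x509 -hash calls compute each CA certificate's subject hash; symlinking the cert as /etc/ssl/certs/<hash>.0 is what lets OpenSSL's hashed-directory lookup find it (b5213941.0 above is minikubeCA's hash). A sketch of the hash-and-link step, shelling out to openssl just as the log does:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash installs certPath under OpenSSL's hashed-directory
    // naming scheme: <certsDir>/<subject-hash>.0.
    func linkBySubjectHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem",
    		"/etc/ssl/certs")
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("linked", link)
    }
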
	I1210 07:32:07.255844    2240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:07.263714    2240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:32:07.263714    2240 kubeadm.go:401] StartCluster: {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:07.267520    2240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:32:07.300048    2240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:07.317060    2240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:32:07.333647    2240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:32:07.337744    2240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:32:07.353638    2240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:32:07.353638    2240 kubeadm.go:158] found existing configuration files:
	
	I1210 07:32:07.357869    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:32:07.371538    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:32:07.375620    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:32:07.392582    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:32:07.408459    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:32:07.412872    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:32:07.431340    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.446697    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:32:07.451332    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.472431    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
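
The grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that doesn't (or doesn't exist, as here on first start) is removed so kubeadm regenerates it. Roughly, for each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // removeIfStale deletes path unless it already references the expected
    // control-plane endpoint, mirroring the log's grep-then-rm sequence.
    func removeIfStale(path string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // up to date, keep it
    	}
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return nil
    }

    func main() {
    	for _, f := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		if err := removeIfStale("/etc/kubernetes/" + f); err != nil {
    			log.Fatal(err)
    		}
    	}
    }
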
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:04.810034    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:04.838035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:04.888039    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.888039    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:04.892025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:04.955032    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.955032    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:04.959038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:04.995031    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.995031    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:04.999034    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:05.035036    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.035036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:05.040047    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:05.079034    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.079034    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:05.084038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:05.123032    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.123032    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:05.128035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:05.165033    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.165033    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:05.169028    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:05.205183    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.205183    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:05.205183    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:05.205183    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:05.248358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:05.248358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:05.349366    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
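
Every `describe nodes` attempt in this section fails the same way: kubectl cannot reach the apiserver on localhost:8443 because nothing is listening there. The failure can be reproduced with a bare TCP dial; a minimal sketch in plain Go (not minikube code, and run inside the node where 8443 would be served):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is dialing in the errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 5*time.Second)
	if err != nil {
		// Expect "connection refused" while the apiserver is down.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}
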
	I1210 07:32:05.349366    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:05.349366    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:05.384377    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:05.384377    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:05.439383    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:05.439383    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
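
Note the fallback chain in the "container status" gather two lines up: prefer crictl if it is on PATH, otherwise fall back to `docker ps -a`. A hedged sketch of shelling out the same way (minikube actually runs these over SSH inside the node; this stand-in runs them locally):

package main

import (
	"fmt"
	"os/exec"
)

// run returns combined output of a bash command line, mirroring how the
// gatherers above invoke /bin/bash -c "...".
func run(cmdline string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	return string(out), err
}

func main() {
	// Same fallback the log shows: crictl if available, else docker.
	out, err := run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	if err != nil {
		fmt.Println("container status gather failed:", err)
		return
	}
	fmt.Print(out)
}
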
	I1210 07:32:08.021198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:08.045549    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:08.076568    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.076568    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:08.082429    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:08.113514    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.113514    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:08.117280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:08.145243    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.145243    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:08.151846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:08.182475    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.182475    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:08.186570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:08.214500    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.214554    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:08.218698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:08.250229    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.250229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:08.254493    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:08.298394    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.298394    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:08.302457    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:08.331561    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.331561    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:08.331561    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:08.331561    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:08.368913    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:08.368913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:32:07.487983    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:32:07.492242    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:32:07.510557    2240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:32:07.626646    2240 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:32:07.630270    2240 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:32:07.725615    2240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1210 07:32:08.453343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:08.453378    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:08.453417    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:08.488219    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:08.488219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:08.533777    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:08.533777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.100898    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:11.123310    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:11.154369    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.154369    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:11.158211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:11.188349    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.188419    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:11.191999    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:11.218233    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.218263    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:11.222177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:11.248157    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.248157    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:11.252075    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:11.280934    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.280934    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:11.284871    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:11.316173    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.316225    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:11.320150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:11.350432    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.350494    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:11.354282    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:11.381767    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.381819    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:11.381819    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:11.381874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.447079    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:11.447079    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:11.485987    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:11.485987    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:11.568313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:11.568365    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:11.568408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:11.599474    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:11.599518    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
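
Interleaved here is a second test process (apparently pid 6044, the no-preload test) polling node no-preload-099700 for its Ready condition and getting EOF on the host-forwarded apiserver port 127.0.0.1:57440. A hedged stand-in for that GET using plain net/http rather than the client-go machinery the test actually uses; the point is reproducing the transport-level EOF, since without credentials a healthy apiserver would answer 401/403 rather than return the node object:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Port 57440 is the forwarded apiserver port from the warning above.
	client := &http.Client{Transport: &http.Transport{
		// Test-only: the apiserver's cert is not trusted by the host.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://127.0.0.1:57440/api/v1/nodes/no-preload-099700")
	if err != nil {
		// An EOF here matches the "(will retry)" warning in the log.
		fmt.Println("node GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
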
	I1210 07:32:14.165429    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:14.189363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:14.220411    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.220478    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:14.223878    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:14.253748    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.253798    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:14.257409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:14.288235    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.288235    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:14.291689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:14.323349    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.323349    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:14.326680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:14.355227    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.355227    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:14.358704    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:14.389648    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.389648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:14.393032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:14.424212    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.424212    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:14.427425    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:14.457834    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.457834    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:14.457834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:14.457834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:14.486053    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:14.486053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:14.538138    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:14.538138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:14.601542    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:14.601542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:14.638885    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:14.638885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:14.724482    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
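
By this point the 1436 process has cycled through the same probe-and-gather sequence roughly every 3 seconds since 07:32:05: check for a running apiserver, find none, collect kubelet/dmesg/describe-nodes/Docker/container-status logs, retry. A standalone sketch of that cadence, assuming a Linux host with pgrep (the real wait loop lives inside minikube and runs the check over SSH; log gathering is stubbed out here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the check the log repeats:
// sudo pgrep -xnf kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		fmt.Printf("attempt %d: kube-apiserver not running; gathering logs and retrying\n", attempt)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
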
	I1210 07:32:29.223517    2240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:32:29.224269    2240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:32:29.224467    2240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:32:29.229027    2240 out.go:252]   - Generating certificates and keys ...
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:32:29.229660    2240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:32:29.229827    2240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:32:29.230468    2240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.230658    2240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:32:29.230768    2240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:32:29.230900    2240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:32:29.231503    2240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:32:29.231582    2240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:32:29.231582    2240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:32:29.234181    2240 out.go:252]   - Booting up control plane ...
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:32:29.234702    2240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:32:29.234874    2240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002366911s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.235267696s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.434241439s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.5023353s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:32:29.236992    2240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:32:29.237590    2240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:32:29.237590    2240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:32:29.237590    2240 kubeadm.go:319] [bootstrap-token] Using token: a4ld74.20ve6i3rm5ksexxo
	I1210 07:32:29.239648    2240 out.go:252]   - Configuring RBAC rules ...
	I1210 07:32:29.239648    2240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:32:29.240674    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:32:29.240944    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:32:29.241383    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:32:29.241649    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:32:29.241668    2240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:32:29.241668    2240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:32:29.242197    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:32:29.242850    2240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:32:29.242850    2240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:32:29.243436    2240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--control-plane 
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:32:29.244018    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.244018    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
	I1210 07:32:29.244018    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:29.246745    2240 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1210 07:32:29.266121    2240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 07:32:29.270492    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1210 07:32:29.280075    2240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1210 07:32:29.280075    2240 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1210 07:32:29.314572    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 07:32:29.754597    2240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-648600 minikube.k8s.io/updated_at=2025_12_10T07_32_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=custom-flannel-648600 minikube.k8s.io/primary=true
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.770603    2240 ops.go:34] apiserver oom_adj: -16
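The oom_adj: -16 read above is minikube confirming the API server is shielded from the OOM killer; -16 is the legacy /proc view of the oom_score_adj -997 that the kubelet gives critical static pods. The check can be repeated by hand (standard procfs paths; the pgrep pattern mirrors the logged one):

    # Find the newest kube-apiserver process and read both OOM knobs.
    pid=$(pgrep -xnf 'kube-apiserver.*minikube.*')
    cat /proc/$pid/oom_adj        # legacy scale, -17..15; -16 = strongly protected
    cat /proc/$pid/oom_score_adj  # current scale, -1000..1000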
	I1210 07:32:29.895974    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.395328    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.896828    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.396414    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.896200    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:32.396778    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
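The block above is one iteration of minikube's log-gathering loop (logs.go): probe each control-plane container by the k8s_<name> prefix that cri-dockerd gives pod containers, then fall back to the journals when nothing is running. The same sweep can be reproduced with a short loop, assuming that naming convention:

    # Probe each expected control-plane container the way logs.go does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      docker ps -a --filter "name=k8s_$c" --format '{{.ID}} {{.Status}}'
    done
    # Nothing matched here, so the interesting signal is in the kubelet journal.
    sudo journalctl -u kubelet -n 400 --no-pager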
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:32.894984    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.397040    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.895777    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:34.084987    2240 kubeadm.go:1114] duration metric: took 4.3302518s to wait for elevateKubeSystemPrivileges
	I1210 07:32:34.085013    2240 kubeadm.go:403] duration metric: took 26.8208803s to StartCluster
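The half-second cadence of the repeated get sa default calls above is minikube waiting for the default service account to exist before binding kube-system privileges; the 4.33s metric is that wait. The equivalent poll, assuming the same in-VM kubectl and kubeconfig paths shown in the log:

    # Poll until the default service account appears, as elevateKubeSystemPrivileges does.
    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done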
	I1210 07:32:34.085095    2240 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.085299    2240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:32:34.087295    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.088397    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:32:34.088397    2240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:32:34.088932    2240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:34.089115    2240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-648600"
	I1210 07:32:34.089272    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.089454    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:32:34.091048    2240 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:34.099313    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.100384    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.101389    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.165121    2240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-648600"
	I1210 07:32:34.165121    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.166107    2240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:32:34.174109    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.177116    2240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:34.177116    2240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:32:34.181109    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.228110    2240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.228110    2240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:32:34.231111    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.232110    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.295102    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.361698    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:32:34.577307    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.743911    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.748484    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:35.145540    2240 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
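The sed pipeline a few lines up injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.65.254 here). Whether the record landed can be verified directly (plain kubectl; the ConfigMap name coredns is fixed by kubeadm):

    # Show the injected hosts block in the live CoreDNS config.
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'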
	I1210 07:32:35.149854    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:35.210514    2240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:35.684992    2240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-648600" context rescaled to 1 replicas
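kapi.go rescaling the coredns deployment to 1 replica is minikube's default for single-node clusters (kubeadm ships 2 replicas). The same effect by hand:

    # Drop CoreDNS to a single replica on a one-node cluster.
    kubectl -n kube-system scale deployment coredns --replicas=1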
	I1210 07:32:35.860846    2240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1123448s)
	I1210 07:32:35.863841    2240 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 07:32:35.869842    2240 addons.go:530] duration metric: took 1.7814171s for enable addons: enabled=[default-storageclass storage-provisioner]
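Both addons applied here are also exposed through the CLI; the equivalent of this start-time enablement, using the profile name from the log:

    minikube -p custom-flannel-648600 addons enable storage-provisioner
    minikube -p custom-flannel-648600 addons enable default-storageclass
    minikube -p custom-flannel-648600 addons list   # confirm both show as enabled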
	W1210 07:32:37.217134    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:32:39.747934    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:42.215582    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:44.217341    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:45.217929    2240 node_ready.go:49] node "custom-flannel-648600" is "Ready"
	I1210 07:32:45.217929    2240 node_ready.go:38] duration metric: took 10.0071872s for node "custom-flannel-648600" to be "Ready" ...
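node_ready.go polls the node object until its Ready condition flips to True, retrying on False as seen above; the 10.0s metric is that wait. Outside the test harness, the same wait is a single kubectl invocation:

    # Block until the node reports Ready (the harness allowed up to 15m).
    kubectl wait --for=condition=Ready node/custom-flannel-648600 --timeout=15m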
	I1210 07:32:45.217929    2240 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:45.221913    2240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.241224    2240 api_server.go:72] duration metric: took 11.1520714s to wait for apiserver process to appear ...
	I1210 07:32:45.241248    2240 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:45.241297    2240 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58199/healthz ...
	I1210 07:32:45.255531    2240 api_server.go:279] https://127.0.0.1:58199/healthz returned 200:
	ok
	I1210 07:32:45.259632    2240 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:45.259696    2240 api_server.go:131] duration metric: took 18.4479ms to wait for apiserver health ...
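The healthz probe hits the host-mapped API server port (58199 for this profile; each profile gets its own mapping). It is reproducible with curl; -k skips TLS verification since the cluster CA is not in the host trust store, and the endpoint is readable anonymously via the default system:public-info-viewer binding:

    # Same probe minikube runs; expect the literal body "ok".
    curl -sk https://127.0.0.1:58199/healthz
    # Per-check detail, if needed:
    curl -sk 'https://127.0.0.1:58199/readyz?verbose'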
	I1210 07:32:45.259716    2240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:45.268791    2240 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:45.268849    2240 system_pods.go:61] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.268849    2240 system_pods.go:61] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.268894    2240 system_pods.go:74] duration metric: took 9.14ms to wait for pod list to return data ...
	I1210 07:32:45.268935    2240 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:45.273316    2240 default_sa.go:45] found service account: "default"
	I1210 07:32:45.273353    2240 default_sa.go:55] duration metric: took 4.4181ms for default service account to be created ...
	I1210 07:32:45.273353    2240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:45.280767    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.280945    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.280945    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.281064    2240 retry.go:31] will retry after 250.377545ms: missing components: kube-dns
	I1210 07:32:45.539061    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.539616    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.539616    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.539718    2240 retry.go:31] will retry after 289.337772ms: missing components: kube-dns
	I1210 07:32:45.840329    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.840329    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.840329    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.840528    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.840528    2240 retry.go:31] will retry after 309.196772ms: missing components: kube-dns
	I1210 07:32:46.157293    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.157293    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.157293    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.157293    2240 retry.go:31] will retry after 407.04525ms: missing components: kube-dns
	I1210 07:32:46.592154    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.592265    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.592265    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.592318    2240 retry.go:31] will retry after 495.94184ms: missing components: kube-dns
	I1210 07:32:47.094557    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.094557    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.094557    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.095074    2240 retry.go:31] will retry after 778.892273ms: missing components: kube-dns
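The retry loop above backs off while waiting for kube-dns (CoreDNS) to leave Pending; flannel has to wire up pod networking before the CoreDNS pod sandbox can start, which is why it is the last component to come up. A declarative equivalent, using the standard k8s-app=kube-dns label that CoreDNS pods carry:

    # Wait for the CoreDNS pod(s) to become Ready instead of polling pod lists.
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=180s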
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
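These repeated connection-refused errors all say the same thing: nothing is serving on localhost:8443 inside the node, which matches the empty container sweeps above. A minimal way to confirm that by hand (a sketch; <profile> is a placeholder for whichever profile this log belongs to, not a name from the log):

    # From the host, check whether anything is bound to the apiserver port inside the node
    minikube ssh -p <profile> -- "sudo ss -ltnp | grep 8443 || echo 'nothing listening on :8443'"
    # Probe the health endpoint; with no apiserver container this is expected to fail
    minikube ssh -p <profile> -- "curl -ksS --max-time 5 https://localhost:8443/healthz"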
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
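The container-status command above uses a small shell fallback chain: prefer crictl when it resolves on PATH, otherwise fall back to plain docker. Spelled out with comments (an equivalent sketch):

    # `which crictl` prints the path when installed; if it fails, `echo crictl`
    # keeps the command word non-empty so the first branch still runs (and fails cleanly)
    CRICTL="$(which crictl || echo crictl)"
    # Prefer crictl; on any failure, list all containers with docker instead
    sudo "$CRICTL" ps -a || sudo docker ps -a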
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:47.881744    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.881744    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.881744    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.882297    2240 retry.go:31] will retry after 913.098856ms: missing components: kube-dns
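The missing component here is kube-dns: the CoreDNS pod is scheduled but its container is not yet Ready, so the poller backs off and retries. The same readiness gate can be expressed from the host with kubectl (a sketch, assuming the kubeconfig context points at this cluster; k8s-app=kube-dns is the standard CoreDNS label):

    # Block until every CoreDNS pod reports the Ready condition, or time out
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s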
	I1210 07:32:48.802046    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:48.802046    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:48.802046    2240 system_pods.go:126] duration metric: took 3.5286376s to wait for k8s-apps to be running ...
	I1210 07:32:48.802046    2240 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:48.807470    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:48.825598    2240 system_svc.go:56] duration metric: took 23.5517ms WaitForService to wait for kubelet
	I1210 07:32:48.825598    2240 kubeadm.go:587] duration metric: took 14.7364354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
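With --quiet, systemctl is-active prints nothing and answers via its exit status alone, which is all this check needs (a sketch of the same probe run interactively):

    # Exit status 0 means the unit is active; anything else means it is not
    if sudo systemctl is-active --quiet kubelet; then
      echo "kubelet: active"
    else
      echo "kubelet: not active"
    fi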
	I1210 07:32:48.825689    2240 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:48.831503    2240 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:32:48.831503    2240 node_conditions.go:123] node cpu capacity is 16
	I1210 07:32:48.831503    2240 node_conditions.go:105] duration metric: took 5.8138ms to run NodePressure ...
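The NodePressure verification reads the capacity figures straight from the Node objects; the same numbers seen here (1055762868Ki ephemeral storage, 16 CPUs) can be pulled with a jsonpath query (a sketch using the standard Node API fields):

    # Print name, ephemeral-storage capacity, and cpu capacity per node
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.ephemeral-storage}{"\t"}{.status.capacity.cpu}{"\n"}{end}'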
	I1210 07:32:48.831503    2240 start.go:242] waiting for startup goroutines ...
	I1210 07:32:48.831503    2240 start.go:247] waiting for cluster config update ...
	I1210 07:32:48.831503    2240 start.go:256] writing updated cluster config ...
	I1210 07:32:48.837195    2240 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:48.844148    2240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:48.853005    2240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.864384    2240 pod_ready.go:94] pod "coredns-66bc5c9577-dhgpj" is "Ready"
	I1210 07:32:48.864472    2240 pod_ready.go:86] duration metric: took 11.4282ms for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.867887    2240 pod_ready.go:83] waiting for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.876367    2240 pod_ready.go:94] pod "etcd-custom-flannel-648600" is "Ready"
	I1210 07:32:48.876367    2240 pod_ready.go:86] duration metric: took 8.4794ms for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.880884    2240 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.888453    2240 pod_ready.go:94] pod "kube-apiserver-custom-flannel-648600" is "Ready"
	I1210 07:32:48.888453    2240 pod_ready.go:86] duration metric: took 7.5694ms for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.891939    2240 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.254863    2240 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-648600" is "Ready"
	I1210 07:32:49.255015    2240 pod_ready.go:86] duration metric: took 363.0699ms for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.454047    2240 pod_ready.go:83] waiting for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.854254    2240 pod_ready.go:94] pod "kube-proxy-vrrgr" is "Ready"
	I1210 07:32:49.854329    2240 pod_ready.go:86] duration metric: took 400.2758ms for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.054101    2240 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:94] pod "kube-scheduler-custom-flannel-648600" is "Ready"
	I1210 07:32:50.453713    2240 pod_ready.go:86] duration metric: took 399.6056ms for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:40] duration metric: took 1.6095401s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:50.552047    2240 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:32:50.555856    2240 out.go:179] * Done! kubectl is now configured to use "custom-flannel-648600" cluster and "default" namespace by default
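At this point minikube has written the kubeconfig entry, so the cluster is reachable under the profile name (a quick sanity sketch):

    kubectl config current-context    # expect: custom-flannel-648600
    kubectl -n kube-system get pods   # the seven pods listed above should be Running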
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
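Each sweep filters on the k8s_ name prefix that cri-dockerd gives Kubernetes-managed containers; all eight lookups coming back empty means no control-plane container has even been created. The loop is easy to reproduce by hand (a sketch; the component list is taken from the sweep above):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      # -a includes exited containers, so an empty result really means "never created"
      echo "== ${c} =="
      sudo docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Status}}'
    done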
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
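This warning is a different failure mode from the 8443 errors above: an EOF from https://127.0.0.1:57440 typically means the TCP connection was accepted (by the Docker port proxy) but closed before a response arrived, rather than refused outright. Probing the forwarded port from the host distinguishes the two (a sketch; the port is the one in the log line):

    # -m 5 bounds the attempt; curl's error text tells refused/reset/timeout apart
    curl -ksS -m 5 https://127.0.0.1:57440/version || echo "probe exited with status $?"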
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
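	Every polling cycle in this log issues the same eight docker ps filters (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and each returns 0 containers. A compact sketch of that sweep as a single loop, run inside the node via minikube ssh, using the k8s_ name prefix Docker gives kube-system containers here:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      echo "${c}: ${ids:-<none>}"   # '<none>' for all eight matches this log
	    done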
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
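	The "describe nodes" gatherer shells into the node and runs the version-pinned kubectl against the in-node kubeconfig, so it keeps exiting with status 1 for as long as nothing listens on 8443. The same probe run directly inside the node, with the exact paths from this log:

	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig describe nodes
	    echo "exit status: $?"   # 1 while the apiserver is down (logs.go:130 above)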
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> Docker <==
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794207271Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794291179Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794301480Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794308081Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794314981Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794339784Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794382688Z" level=info msg="Initializing buildkit"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.916550520Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923562810Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923807334Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923950448Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923820636Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:28:08 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:28:09 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:28:09 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:14.595231    8373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:14.596314    8373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:14.598422    8373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:14.599992    8373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:14.601223    8373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347496] CPU: 6 PID: 490841 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe73ddc4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe73ddc4af6.
	[  +0.000000] RSP: 002b:00007ffc57a05a90 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.867258] CPU: 5 PID: 491006 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1a7acb4b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1a7acb4af6.
	[  +0.000001] RSP: 002b:00007ffe19029200 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:32] tmpfs: Unknown parameter 'noswap'
	[ +15.541609] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:34:14 up  3:02,  0 user,  load average: 1.90, 4.18, 4.59
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:34:11 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:12 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 478.
	Dec 10 07:34:12 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:12 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:12 no-preload-099700 kubelet[8208]: E1210 07:34:12.415634    8208 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:12 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:12 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:13 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 10 07:34:13 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:13 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:13 no-preload-099700 kubelet[8222]: E1210 07:34:13.163678    8222 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:13 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:13 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:13 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 10 07:34:13 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:13 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:13 no-preload-099700 kubelet[8250]: E1210 07:34:13.924254    8250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:13 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:13 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:34:14 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 10 07:34:14 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:14 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:34:14 no-preload-099700 kubelet[8382]: E1210 07:34:14.664312    8382 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:34:14 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:34:14 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
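Note: the kubelet excerpt at the end of the log dump shows the actual failure. kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the restart loop (counters 478 through 481 within about three seconds) never brings up an apiserver, and every status probe below reports it stopped. One way to confirm which cgroup hierarchy the Docker Desktop/WSL2 host exposes is to check the filesystem type of /sys/fs/cgroup; this is a diagnostic sketch, not part of the test run:

	# run against the affected node (profile name taken from this report)
	minikube ssh -p no-preload-099700 -- stat -fc %T /sys/fs/cgroup
	# cgroup2fs = unified (v2) hierarchy; tmpfs = the host is still on cgroup v1

A commonly cited workaround for WSL2 hosts (an assumption, not verified by this run) is to force cgroup v2 by adding kernelCommandLine = cgroup_no_v1=all under the [wsl2] section of %UserProfile%\.wslconfig, then running wsl --shutdown and restarting Docker Desktop.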
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 2 (644.6382ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (378.71s)

TestStartStop/group/newest-cni/serial/SecondStart (382.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1
E1210 07:29:32.146386   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:45.974506   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.389272   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.396638   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.407887   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.429440   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.471436   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.553439   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:50.715202   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:51.038082   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:51.679707   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:52.961899   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:55.524536   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:29:58.157035   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:00.647217   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:10.889773   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:12.527326   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:25.867936   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
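The cert_rotation errors above appear to be leftover noise from earlier test profiles (kindnet-648600, flannel-648600, auto-648600, addons-949500, default-k8s-diff-port-144100): their client certificate files were removed when those profiles were deleted, but the shared kubeconfig still references them, so the client-go certificate reloader logs a failure on every attempt. They do not affect this test's result. A cleanup sketch for a local run, assuming the profile names shown in the log:

	# remove the stale profile entirely
	out/minikube-windows-amd64.exe delete -p flannel-648600
	# or only drop the dangling kubeconfig context
	kubectl config delete-context flannel-648600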
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 105 (6m16.2630607s)

-- stdout --
	* [newest-cni-525200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "newest-cni-525200" primary control-plane node in "newest-cni-525200" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 07:29:28.308596    1436 out.go:360] Setting OutFile to fd 1668 ...
	I1210 07:29:28.399493    1436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:29:28.399493    1436 out.go:374] Setting ErrFile to fd 1136...
	I1210 07:29:28.399493    1436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:29:28.414500    1436 out.go:368] Setting JSON to false
	I1210 07:29:28.421489    1436 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10700,"bootTime":1765341068,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:29:28.421489    1436 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:29:28.424492    1436 out.go:179] * [newest-cni-525200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:29:28.428495    1436 notify.go:221] Checking for updates...
	I1210 07:29:28.430499    1436 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:29:28.432502    1436 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:29:28.435490    1436 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:29:28.437490    1436 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:29:28.439495    1436 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:29:28.442501    1436 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:29:28.443490    1436 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:29:28.561491    1436 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:29:28.564497    1436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:29:28.868346    1436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:80 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-10 07:29:28.847117446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:29:28.876337    1436 out.go:179] * Using the docker driver based on existing profile
	I1210 07:29:28.879343    1436 start.go:309] selected driver: docker
	I1210 07:29:28.879343    1436 start.go:927] validating driver "docker" against &{Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:29:28.879343    1436 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:29:28.929357    1436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:29:29.184372    1436 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:29:29.163141778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:29:29.185347    1436 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1210 07:29:29.185347    1436 cni.go:84] Creating CNI manager for ""
	I1210 07:29:29.185347    1436 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:29:29.185347    1436 start.go:353] cluster config:
	{Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:29:29.188347    1436 out.go:179] * Starting "newest-cni-525200" primary control-plane node in "newest-cni-525200" cluster
	I1210 07:29:29.191370    1436 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:29:29.195346    1436 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:29:29.197340    1436 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:29:29.197340    1436 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 07:29:29.197340    1436 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 07:29:29.197340    1436 cache.go:65] Caching tarball of preloaded images
	I1210 07:29:29.197340    1436 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1210 07:29:29.197340    1436 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 07:29:29.197340    1436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\config.json ...
	I1210 07:29:29.274351    1436 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:29:29.274351    1436 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1210 07:29:29.274351    1436 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:29:29.274351    1436 start.go:360] acquireMachinesLock for newest-cni-525200: {Name:mkd446da0a6d37aeadfde49218ee5d3bd06b715b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:29:29.274351    1436 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-525200"
	I1210 07:29:29.274351    1436 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:29:29.274351    1436 fix.go:54] fixHost starting: 
	I1210 07:29:29.281343    1436 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:29:29.332352    1436 fix.go:112] recreateIfNeeded on newest-cni-525200: state=Stopped err=<nil>
	W1210 07:29:29.332352    1436 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:29:29.335363    1436 out.go:252] * Restarting existing docker container for "newest-cni-525200" ...
	I1210 07:29:29.338354    1436 cli_runner.go:164] Run: docker start newest-cni-525200
	I1210 07:29:30.109315    1436 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:29:30.170329    1436 kic.go:430] container "newest-cni-525200" state is running.
	I1210 07:29:30.180328    1436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-525200
	I1210 07:29:30.240305    1436 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\config.json ...
	I1210 07:29:30.242311    1436 machine.go:94] provisionDockerMachine start ...
	I1210 07:29:30.245309    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:30.313310    1436 main.go:143] libmachine: Using SSH client type: native
	I1210 07:29:30.314307    1436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57760 <nil> <nil>}
	I1210 07:29:30.314307    1436 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:29:30.316324    1436 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:29:33.485765    1436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-525200
	
	I1210 07:29:33.485765    1436 ubuntu.go:182] provisioning hostname "newest-cni-525200"
	I1210 07:29:33.488756    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:33.543757    1436 main.go:143] libmachine: Using SSH client type: native
	I1210 07:29:33.543757    1436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57760 <nil> <nil>}
	I1210 07:29:33.543757    1436 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-525200 && echo "newest-cni-525200" | sudo tee /etc/hostname
	I1210 07:29:33.733165    1436 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-525200
	
	I1210 07:29:33.736137    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:33.792213    1436 main.go:143] libmachine: Using SSH client type: native
	I1210 07:29:33.793210    1436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57760 <nil> <nil>}
	I1210 07:29:33.793210    1436 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-525200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-525200/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-525200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:29:33.967538    1436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:29:33.967538    1436 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:29:33.967538    1436 ubuntu.go:190] setting up certificates
	I1210 07:29:33.967538    1436 provision.go:84] configureAuth start
	I1210 07:29:33.971530    1436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-525200
	I1210 07:29:34.018547    1436 provision.go:143] copyHostCerts
	I1210 07:29:34.019546    1436 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:29:34.019546    1436 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:29:34.019546    1436 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:29:34.020534    1436 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:29:34.020534    1436 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:29:34.020534    1436 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:29:34.021538    1436 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:29:34.021538    1436 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:29:34.021538    1436 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:29:34.022531    1436 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-525200 san=[127.0.0.1 192.168.121.2 localhost minikube newest-cni-525200]
	I1210 07:29:34.212630    1436 provision.go:177] copyRemoteCerts
	I1210 07:29:34.216633    1436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:29:34.222627    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:34.285208    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:34.401558    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:29:34.428912    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:29:34.457713    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:29:34.487719    1436 provision.go:87] duration metric: took 520.1731ms to configureAuth
	I1210 07:29:34.487719    1436 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:29:34.487719    1436 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:29:34.491727    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:34.545710    1436 main.go:143] libmachine: Using SSH client type: native
	I1210 07:29:34.546710    1436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57760 <nil> <nil>}
	I1210 07:29:34.546710    1436 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:29:34.716396    1436 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:29:34.716396    1436 ubuntu.go:71] root file system type: overlay
	I1210 07:29:34.716396    1436 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:29:34.719420    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:34.775418    1436 main.go:143] libmachine: Using SSH client type: native
	I1210 07:29:34.776399    1436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57760 <nil> <nil>}
	I1210 07:29:34.776399    1436 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:29:34.977748    1436 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:29:34.981217    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:35.040347    1436 main.go:143] libmachine: Using SSH client type: native
	I1210 07:29:35.040347    1436 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 57760 <nil> <nil>}
	I1210 07:29:35.040347    1436 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:29:35.222643    1436 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:29:35.222643    1436 machine.go:97] duration metric: took 4.9802543s to provisionDockerMachine
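[editor note] The unit update above is deliberately idempotent: the rendered unit is written to docker.service.new, and the `diff -u ... || { mv ...; daemon-reload; enable; restart; }` command only swaps the file and restarts dockerd when the content actually changed. A rough local Go equivalent of that change-detection pattern, assuming root and the paths from the log (a sketch of the pattern, not minikube's code):

    // Sketch: swap in docker.service.new and restart only on a real change.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// A missing current unit just means "changed".
    	oldUnit, _ := os.ReadFile("/lib/systemd/system/docker.service")
    	newUnit, err := os.ReadFile("/lib/systemd/system/docker.service.new")
    	if err != nil {
    		panic(err)
    	}
    	if bytes.Equal(oldUnit, newUnit) {
    		fmt.Println("docker.service unchanged, skipping restart")
    		return
    	}
    	if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
    		panic(err)
    	}
    	// Same systemctl sequence as the logged SSH command.
    	for _, args := range [][]string{
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("%v: %v\n%s", args, err, out))
    		}
    	}
    }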
	I1210 07:29:35.222643    1436 start.go:293] postStartSetup for "newest-cni-525200" (driver="docker")
	I1210 07:29:35.222643    1436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:29:35.226639    1436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:29:35.230644    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:35.292642    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:35.427164    1436 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:29:35.434961    1436 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:29:35.434961    1436 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:29:35.434961    1436 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:29:35.434961    1436 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:29:35.435987    1436 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:29:35.441497    1436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:29:35.456999    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:29:35.491144    1436 start.go:296] duration metric: took 268.497ms for postStartSetup
	I1210 07:29:35.494144    1436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:29:35.497144    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:35.552188    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:35.688470    1436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:29:35.699955    1436 fix.go:56] duration metric: took 6.4255042s for fixHost
	I1210 07:29:35.699955    1436 start.go:83] releasing machines lock for "newest-cni-525200", held for 6.4255042s
	I1210 07:29:35.703954    1436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-525200
	I1210 07:29:35.757962    1436 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:29:35.761960    1436 ssh_runner.go:195] Run: cat /version.json
	I1210 07:29:35.762959    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:35.766977    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:35.822955    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:35.822955    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	W1210 07:29:35.948950    1436 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:29:35.957596    1436 ssh_runner.go:195] Run: systemctl --version
	I1210 07:29:35.979960    1436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:29:35.989820    1436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:29:35.995332    1436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:29:36.013575    1436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:29:36.013575    1436 start.go:496] detecting cgroup driver to use...
	I1210 07:29:36.013575    1436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:29:36.013575    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:29:36.046897    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:29:36.070905    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1210 07:29:36.091894    1436 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:29:36.091894    1436 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
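[editor note] The warning above traces back to the probe at 07:29:35.757: the Windows binary name `curl.exe` was executed inside the Linux guest, where only `curl` exists, so the check exited 127 ("command not found") and the connectivity warning fired regardless of whether registry.k8s.io is actually reachable. A hedged sketch of a guard that picks the binary name by the OS that will run it; the guestOS parameter is illustrative, not minikube's API:

    // Sketch: never ask a Linux guest to execute "curl.exe".
    package main

    import "fmt"

    func curlBinary(guestOS string) string {
    	if guestOS == "windows" {
    		return "curl.exe"
    	}
    	return "curl"
    }

    func main() {
    	cmd := fmt.Sprintf("%s -sS -m 2 https://registry.k8s.io/", curlBinary("linux"))
    	fmt.Println(cmd) // curl -sS -m 2 https://registry.k8s.io/
    }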
	I1210 07:29:36.153049    1436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:29:36.159051    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:29:36.181652    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:29:36.202643    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:29:36.220814    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:29:36.238864    1436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:29:36.259169    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:29:36.283168    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:29:36.301153    1436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:29:36.319156    1436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:29:36.336150    1436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:29:36.355157    1436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:29:36.519750    1436 ssh_runner.go:195] Run: sudo systemctl restart containerd
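[editor note] Each `sed -i -r` above patches a single key of /etc/containerd/config.toml in place (sandbox_image, SystemdCgroup, runtime type, conf_dir, enable_unprivileged_ports) before containerd is restarted. A Go sketch of the same idea for the SystemdCgroup line, using a regexp equivalent to the logged sed expression; run-as-root and in-place rewrite are assumed:

    // Sketch: force SystemdCgroup = false in containerd's config, mirroring
    // sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		panic(err)
    	}
    }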
	I1210 07:29:36.696631    1436 start.go:496] detecting cgroup driver to use...
	I1210 07:29:36.696631    1436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:29:36.702635    1436 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:29:36.728947    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:29:36.749919    1436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:29:36.820922    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:29:36.842928    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:29:36.864925    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:29:36.892924    1436 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:29:36.904933    1436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:29:36.918931    1436 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:29:36.945993    1436 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:29:37.100141    1436 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:29:37.255880    1436 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:29:37.256132    1436 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:29:37.284282    1436 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:29:37.307271    1436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:29:37.417902    1436 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:29:38.405469    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:29:38.435470    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:29:38.464477    1436 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1210 07:29:38.499485    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:29:38.525460    1436 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:29:38.715472    1436 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:29:38.906467    1436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:29:39.062476    1436 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:29:39.094099    1436 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:29:39.120073    1436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:29:39.284679    1436 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:29:39.430667    1436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
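[editor note] Re-enabling cri-docker follows a fixed order in the transcript: stop the socket, unmask and enable it, daemon-reload, restart the socket, reset-failed on the service, daemon-reload again, restart the service, then verify with is-active. A sketch replaying that sequence and aborting at the first failing step; local root execution is assumed here, whereas minikube drives these commands over SSH:

    // Sketch: the cri-docker reset sequence from the log, fail-fast.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	steps := [][]string{
    		{"systemctl", "stop", "cri-docker.socket"},
    		{"systemctl", "unmask", "cri-docker.socket"},
    		{"systemctl", "enable", "cri-docker.socket"},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "cri-docker.socket"},
    		{"systemctl", "reset-failed", "cri-docker.service"},
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "restart", "cri-docker.service"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
    			log.Fatalf("%v failed: %v\n%s", s, err, out)
    		}
    	}
    }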
	I1210 07:29:39.454674    1436 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:29:39.459668    1436 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:29:39.467662    1436 start.go:564] Will wait 60s for crictl version
	I1210 07:29:39.472669    1436 ssh_runner.go:195] Run: which crictl
	I1210 07:29:39.484662    1436 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:29:39.540699    1436 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:29:39.544667    1436 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:29:39.596253    1436 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:29:39.650267    1436 out.go:252] * Preparing Kubernetes v1.35.0-rc.1 on Docker 29.1.2 ...
	I1210 07:29:39.654254    1436 cli_runner.go:164] Run: docker exec -t newest-cni-525200 dig +short host.docker.internal
	I1210 07:29:39.808266    1436 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:29:39.814267    1436 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:29:39.822264    1436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
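[editor note] The bash one-liner above is a remove-then-append upsert: `grep -v` strips any existing `host.minikube.internal` mapping, the fresh `192.168.65.254` line is appended, and the result is copied back over /etc/hosts. The same pattern in Go; the upsertHost function name is illustrative:

    // Sketch: idempotent /etc/hosts upsert matching the bash one-liner above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any line already ending in "<tab>name", like grep -v does.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }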
	I1210 07:29:39.843268    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:39.918269    1436 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1210 07:29:39.920267    1436 kubeadm.go:884] updating cluster {Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:29:39.920267    1436 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 07:29:39.925272    1436 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:29:39.965262    1436 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 07:29:39.965262    1436 docker.go:621] Images already preloaded, skipping extraction
	I1210 07:29:39.970270    1436 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:29:40.011279    1436 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	registry.k8s.io/kube-proxy:v1.35.0-rc.1
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1210 07:29:40.011279    1436 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:29:40.011279    1436 kubeadm.go:935] updating node { 192.168.121.2 8443 v1.35.0-rc.1 docker true true} ...
	I1210 07:29:40.011279    1436 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-525200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.121.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:29:40.015275    1436 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:29:40.119271    1436 cni.go:84] Creating CNI manager for ""
	I1210 07:29:40.119271    1436 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 07:29:40.119271    1436 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1210 07:29:40.119271    1436 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.121.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-525200 NodeName:newest-cni-525200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.121.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.121.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:29:40.119271    1436 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.121.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-525200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.121.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.121.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
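[editor note] The generated kubeadm.yaml above stacks four documents separated by `---`: InitConfiguration (node registration, CRI socket), ClusterConfiguration (control-plane endpoint, certSANs, pod/service CIDRs), KubeletConfiguration (cgroupfs driver, eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). A sketch that lists each stacked document's kind; a local copy of the file and the gopkg.in/yaml.v3 module are assumed:

    // Sketch: enumerate the stacked documents in the generated kubeadm.yaml.
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }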
	I1210 07:29:40.125263    1436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1210 07:29:40.142277    1436 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:29:40.147296    1436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:29:40.162268    1436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1210 07:29:40.188276    1436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1210 07:29:40.216271    1436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1210 07:29:40.244280    1436 ssh_runner.go:195] Run: grep 192.168.121.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:29:40.254276    1436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.121.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:29:40.278265    1436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:29:40.456281    1436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:29:40.483286    1436 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200 for IP: 192.168.121.2
	I1210 07:29:40.483286    1436 certs.go:195] generating shared ca certs ...
	I1210 07:29:40.483286    1436 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:29:40.484268    1436 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:29:40.484268    1436 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:29:40.485272    1436 certs.go:257] generating profile certs ...
	I1210 07:29:40.485272    1436 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\client.key
	I1210 07:29:40.485272    1436 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key.96f8e4b6
	I1210 07:29:40.486273    1436 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.key
	I1210 07:29:40.487284    1436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:29:40.487284    1436 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:29:40.488274    1436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:29:40.488274    1436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:29:40.488274    1436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:29:40.488274    1436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:29:40.489281    1436 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:29:40.490287    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:29:40.527871    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:29:40.564677    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:29:40.605691    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:29:40.641702    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:29:40.677691    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:29:40.713687    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:29:40.754690    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-525200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:29:40.786681    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:29:40.818687    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:29:40.851683    1436 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:29:40.887697    1436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:29:40.923697    1436 ssh_runner.go:195] Run: openssl version
	I1210 07:29:40.946703    1436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:29:40.967693    1436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:29:40.988704    1436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:29:40.996695    1436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:29:41.002685    1436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:29:41.063687    1436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:29:41.085695    1436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:29:41.103699    1436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:29:41.124687    1436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:29:41.132687    1436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:29:41.137687    1436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:29:41.218695    1436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:29:41.238707    1436 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:29:41.259692    1436 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:29:41.284690    1436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:29:41.292699    1436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:29:41.298691    1436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:29:41.364698    1436 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:29:41.386718    1436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:29:41.402693    1436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:29:41.475699    1436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:29:41.546697    1436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:29:41.596695    1436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:29:41.658692    1436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:29:41.725474    1436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
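[editor note] Each `openssl x509 ... -checkend 86400` above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a nonzero exit would trigger regeneration. The equivalent check with Go's crypto/x509; the path shown is one of the certs from the log and stands in for any of them:

    // Sketch: the -checkend 86400 test — is the cert still valid in 24h?
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is good for at least 24h")
    }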
	I1210 07:29:41.787358    1436 kubeadm.go:401] StartCluster: {Name:newest-cni-525200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-525200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:29:41.793363    1436 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:29:41.836756    1436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:29:41.850603    1436 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:29:41.850603    1436 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:29:41.861586    1436 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:29:41.877585    1436 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:29:41.882601    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:41.942582    1436 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-525200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:29:41.943583    1436 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-525200" cluster setting kubeconfig missing "newest-cni-525200" context setting]
	I1210 07:29:41.945587    1436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:29:41.981591    1436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:29:41.995582    1436 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1210 07:29:41.995582    1436 kubeadm.go:602] duration metric: took 144.9771ms to restartPrimaryControlPlane
	I1210 07:29:41.995582    1436 kubeadm.go:403] duration metric: took 208.221ms to StartCluster
	I1210 07:29:41.995582    1436 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:29:41.995582    1436 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:29:41.996585    1436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:29:41.997586    1436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.121.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:29:41.997586    1436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:29:41.997586    1436 addons.go:70] Setting default-storageclass=true in profile "newest-cni-525200"
	I1210 07:29:41.997586    1436 addons.go:70] Setting dashboard=true in profile "newest-cni-525200"
	I1210 07:29:41.997586    1436 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-525200"
	I1210 07:29:41.997586    1436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-525200"
	I1210 07:29:41.997586    1436 addons.go:239] Setting addon dashboard=true in "newest-cni-525200"
	W1210 07:29:41.997586    1436 addons.go:248] addon dashboard should already be in state true
	I1210 07:29:41.997586    1436 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-525200"
	I1210 07:29:41.997586    1436 host.go:66] Checking if "newest-cni-525200" exists ...
	I1210 07:29:41.997586    1436 host.go:66] Checking if "newest-cni-525200" exists ...
	I1210 07:29:41.997586    1436 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:29:42.000595    1436 out.go:179] * Verifying Kubernetes components...
	I1210 07:29:42.007585    1436 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:29:42.007585    1436 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:29:42.008586    1436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:29:42.009601    1436 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:29:42.069591    1436 addons.go:239] Setting addon default-storageclass=true in "newest-cni-525200"
	I1210 07:29:42.069591    1436 host.go:66] Checking if "newest-cni-525200" exists ...
	I1210 07:29:42.069591    1436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:29:42.069591    1436 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1210 07:29:42.072590    1436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:29:42.072590    1436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:29:42.076599    1436 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1210 07:29:42.076599    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:42.078599    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1210 07:29:42.078599    1436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1210 07:29:42.079591    1436 cli_runner.go:164] Run: docker container inspect newest-cni-525200 --format={{.State.Status}}
	I1210 07:29:42.084593    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:42.150599    1436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:29:42.150599    1436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:29:42.150599    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:42.153598    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:42.154600    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:42.220598    1436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57760 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-525200\id_rsa Username:docker}
	I1210 07:29:42.224599    1436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:29:42.265608    1436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-525200
	I1210 07:29:42.299617    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:29:42.301600    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1210 07:29:42.301600    1436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1210 07:29:42.342587    1436 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:29:42.346587    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:42.386200    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1210 07:29:42.386200    1436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1210 07:29:42.419201    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1210 07:29:42.419201    1436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1210 07:29:42.460195    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:29:42.471192    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1210 07:29:42.472200    1436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1210 07:29:42.479199    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.479199    1436 retry.go:31] will retry after 134.428049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.501199    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1210 07:29:42.501199    1436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1210 07:29:42.550203    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1210 07:29:42.550203    1436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1210 07:29:42.582192    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1210 07:29:42.582192    1436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1210 07:29:42.592201    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.592201    1436 retry.go:31] will retry after 146.570194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.609202    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1210 07:29:42.609202    1436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1210 07:29:42.620218    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:29:42.643208    1436 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:29:42.643208    1436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1210 07:29:42.674204    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:42.735212    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.735212    1436 retry.go:31] will retry after 463.268584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
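[editor note] All of the `apply failed, will retry` entries in this stretch share one cause: the kubelet was just restarted and the apiserver on localhost:8443 is not accepting connections yet, so kubectl's openapi download is refused and minikube retries each addon apply with a randomized, growing delay (134ms, 146ms, 463ms, ...). A sketch of that retry shape; the durations, attempt count, and function names here are illustrative, not minikube's actual retry.go:

    // Sketch: retry with randomized, growing backoff, logging each delay.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Exponential growth plus jitter, like the varying delays above.
    		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	_ = retry(3, 100*time.Millisecond, func() error {
    		// Stand-in for the failing kubectl apply while the apiserver boots.
    		return errors.New("connection refused")
    	})
    }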
	I1210 07:29:42.744195    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:29:42.798211    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.798211    1436 retry.go:31] will retry after 342.192108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.848199    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:29:42.855198    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:42.855198    1436 retry.go:31] will retry after 397.99515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.145214    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:29:43.205211    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:29:43.259218    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:29:43.265207    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.265207    1436 retry.go:31] will retry after 269.49272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:43.319220    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.319220    1436 retry.go:31] will retry after 709.161462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.347212    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:29:43.363204    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.363204    1436 retry.go:31] will retry after 577.464008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.540208    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:43.645970    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.646072    1436 retry.go:31] will retry after 305.274131ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:43.847357    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:43.944933    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:29:43.955952    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:44.031347    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.031347    1436 retry.go:31] will retry after 610.179508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.034336    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:44.062360    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.062360    1436 retry.go:31] will retry after 903.061585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:44.126283    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.126361    1436 retry.go:31] will retry after 821.572416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.346935    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:44.646969    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:29:44.741167    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.741167    1436 retry.go:31] will retry after 1.730471574s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:44.847677    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:44.953582    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:29:44.969934    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:45.055700    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:45.055700    1436 retry.go:31] will retry after 945.995436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:45.064697    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:45.064697    1436 retry.go:31] will retry after 874.052379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:45.347840    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:45.848198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:45.944066    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:29:46.006787    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:46.069400    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:46.069478    1436 retry.go:31] will retry after 2.763296879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:29:46.086201    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:46.086201    1436 retry.go:31] will retry after 1.374659536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:46.348388    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:46.477854    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:29:46.554646    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:46.554646    1436 retry.go:31] will retry after 2.745766561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:46.847648    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:47.347859    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:47.466825    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:47.550462    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:47.550462    1436 retry.go:31] will retry after 4.15212252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:47.847826    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:48.347903    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:48.837818    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1210 07:29:48.848121    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:29:48.936511    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:48.936511    1436 retry.go:31] will retry after 3.231363732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:49.305338    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:29:49.346876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:29:49.401053    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:49.401053    1436 retry.go:31] will retry after 1.887515402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:49.848986    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:50.347507    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:50.848553    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:51.293145    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:29:51.347950    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:29:51.392186    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:51.392237    1436 retry.go:31] will retry after 3.939153178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:51.707567    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:51.795187    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:51.796006    1436 retry.go:31] will retry after 2.841030027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:51.847450    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:52.172955    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:52.261350    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:52.261394    1436 retry.go:31] will retry after 5.263042341s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:52.347965    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:52.847449    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:53.347262    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:53.847837    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:54.347839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:54.642076    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:54.738360    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:54.738469    1436 retry.go:31] will retry after 4.911206819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:54.847467    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:55.335941    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:29:55.346953    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:29:55.424428    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:55.424428    1436 retry.go:31] will retry after 6.126793832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:55.848131    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:56.347125    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:56.847359    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:57.350378    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:57.530454    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:29:57.633468    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:57.633468    1436 retry.go:31] will retry after 8.718763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:57.846022    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:58.348044    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:58.850732    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:59.348718    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:29:59.655780    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:29:59.747751    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:29:59.747751    1436 retry.go:31] will retry after 6.50622884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:29:59.848752    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:00.347902    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:00.846457    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:01.348514    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:01.558135    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:30:01.652134    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:01.652134    1436 retry.go:31] will retry after 8.953860755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:30:01.849144    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:02.349848    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:02.847829    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:03.347365    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:03.847613    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:04.349090    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:04.849229    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:05.348884    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:05.847973    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:06.260807    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:30:06.349894    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:06.358906    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:30:06.359911    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:06.360899    1436 retry.go:31] will retry after 11.195248278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	W1210 07:30:06.516636    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:06.516636    1436 retry.go:31] will retry after 6.734184478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:30:06.848655    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:07.347655    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:07.847662    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:08.349329    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:08.848795    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:09.350141    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:09.847633    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:10.347957    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:10.611323    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:30:10.694676    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:10.694676    1436 retry.go:31] will retry after 9.279636902s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr identical to the warning above]
	I1210 07:30:10.849902    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:11.348437    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:11.849279    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:12.349406    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:12.848148    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:13.254917    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:30:13.340865    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:13.340936    1436 retry.go:31] will retry after 8.591772271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the ten validation errors above]
	I1210 07:30:13.348125    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:13.849623    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:14.350086    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:14.846441    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:15.351203    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:15.852212    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:16.350004    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:16.852194    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:17.348661    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
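
Note: between applies, minikube polls about twice a second for a running apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`; throughout this log the poll never finds one, which is why every apply keeps failing. A local Go sketch of the same readiness check, run directly rather than through minikube's ssh_runner (names illustrative):

    // waitForProcess polls `pgrep -xnf <pattern>` until it reports a match
    // or the timeout elapses. pgrep exits 0 when at least one process
    // matches the full command line, 1 when none do.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil // a matching process exists
            }
            time.Sleep(500 * time.Millisecond) // same ~0.5s cadence as the log
        }
        return fmt.Errorf("no process matching %q after %v", pattern, timeout)
    }

    func main() {
        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 10*time.Second))
    }
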
	I1210 07:30:17.565334    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:30:17.647746    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:17.648754    1436 retry.go:31] will retry after 13.276814542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout/stderr: [identical to the storage-provisioner validation error above]
	I1210 07:30:17.847292    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:18.352337    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:18.849167    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:19.347111    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:19.850954    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:19.981789    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:30:20.070864    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:20.070864    1436 retry.go:31] will retry after 13.903959469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the storageclass validation error above]
	I1210 07:30:20.348460    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:20.850512    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:21.352443    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:21.850097    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:21.939189    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:30:22.033625    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:22.033729    1436 retry.go:31] will retry after 14.307678562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the ten validation errors above]
	I1210 07:30:22.350394    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:22.848464    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:23.347953    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:23.847521    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:24.347360    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:24.850031    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:25.348117    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:25.849953    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:26.348531    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:26.848535    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:27.347820    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:27.851137    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:28.349771    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:28.849810    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:29.347782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:29.849776    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:30.347068    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:30.849467    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:30.930929    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:30:31.015149    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:31.015149    1436 retry.go:31] will retry after 28.034289725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout/stderr: [identical to the storage-provisioner validation error above]
	I1210 07:30:31.349669    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:31.851260    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:32.347669    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:32.848367    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:33.349741    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:33.848712    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:33.980013    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1210 07:30:34.065338    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:34.065401    1436 retry.go:31] will retry after 33.160455618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout/stderr: [identical to the storageclass validation error above]
	I1210 07:30:34.349129    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:34.848054    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:35.348485    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:35.848612    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:36.348356    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:36.349331    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:30:36.473887    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:30:36.473887    1436 retry.go:31] will retry after 43.780860004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout/stderr: [identical to the ten validation errors above]
	I1210 07:30:36.846723    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:37.349928    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:37.848693    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:38.348406    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:38.848502    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:39.354974    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:39.853215    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:40.348874    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:40.849308    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:41.351524    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:41.848845    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:42.347264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:30:42.390010    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.390010    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:30:42.393727    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:30:42.426692    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.426692    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:30:42.430961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:30:42.461810    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.461810    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:30:42.466266    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:30:42.494911    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.494945    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:30:42.498696    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:30:42.531712    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.531712    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:30:42.535498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:30:42.560818    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.560818    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:30:42.563819    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:30:42.592415    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.592415    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:30:42.597092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:30:42.629427    1436 logs.go:282] 0 containers: []
	W1210 07:30:42.629427    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
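
Note: after the retries, minikube inventories the control-plane containers by name. With the Docker runtime, cri-dockerd names Kubernetes containers `k8s_<container>_<pod>_...`, so `docker ps -a --filter name=k8s_kube-apiserver --format {{.ID}}` prints matching container IDs one per line; every query above returns zero containers, confirming the control plane never came up. A Go sketch of the same scan (helper name illustrative):

    // listContainers returns the IDs of containers whose names match the
    // given prefix filter, using the same docker CLI query as the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(nameFilter string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+nameFilter, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // no matches means empty output, which yields an empty slice
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
            ids, err := listContainers(c)
            fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
        }
    }
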
	I1210 07:30:42.629427    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:30:42.629427    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:30:42.666946    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:30:42.666946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:30:42.749751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:30:42.741482    3371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:42.742489    3371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:42.744216    3371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:42.745361    3371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:42.746266    3371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; identical to the five discovery errors and "connection refused" message above]
	I1210 07:30:42.749751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:30:42.749751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:30:42.778521    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:30:42.778521    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:30:42.847176    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:30:42.847248    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
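
Note: with no containers found, the diagnostics fall back to host-level sources: the kubelet and docker/cri-docker units via journalctl, kernel warnings via dmesg, and container status via `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`, i.e. prefer crictl when installed and otherwise let the first command fail and fall through to plain docker ps. The same try-then-fallback pattern in Go (names illustrative):

    // containerStatus mimics the log's fallback: try crictl first and, if
    // that fails for any reason, fall back to plain docker ps.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
            return string(out), nil
        }
        out, err := exec.Command("docker", "ps", "-a").Output()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(out, err)
    }
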
	I1210 07:30:45.432413    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:45.456857    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:30:45.487196    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.487196    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:30:45.490638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:30:45.519065    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.519065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:30:45.525748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:30:45.553645    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.553645    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:30:45.557593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:30:45.586179    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.586179    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:30:45.589921    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:30:45.616164    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.616164    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:30:45.620749    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:30:45.651665    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.651665    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:30:45.656676    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:30:45.689243    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.689243    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:30:45.693595    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:30:45.724629    1436 logs.go:282] 0 containers: []
	W1210 07:30:45.724629    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:30:45.724629    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:30:45.724629    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:30:45.809503    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:30:45.799734    3540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:45.801092    3540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:45.803539    3540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:45.804692    3540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:45.805775    3540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; identical to the five discovery errors and "connection refused" message above]
	I1210 07:30:45.809503    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:30:45.809503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:30:45.840022    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:30:45.840022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:30:45.894759    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:30:45.894759    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:30:45.956522    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:30:45.956522    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:30:48.506538    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:48.528025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:30:48.560038    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.560038    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:30:48.565339    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:30:48.593565    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.593565    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:30:48.597409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:30:48.624027    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.624103    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:30:48.627537    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:30:48.658372    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.658372    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:30:48.664049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:30:48.692721    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.692795    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:30:48.696062    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:30:48.724856    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.724886    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:30:48.729681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:30:48.781797    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.781797    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:30:48.785928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:30:48.817909    1436 logs.go:282] 0 containers: []
	W1210 07:30:48.817909    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:30:48.817982    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:30:48.817982    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:30:48.882110    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:30:48.882110    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:30:48.920852    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:30:48.920852    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:30:49.022834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:30:49.013178    3709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:49.014618    3709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:49.016155    3709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:49.017596    3709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:49.018731    3709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; identical to the five discovery errors and "connection refused" message above]
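
Note: the repeated memcache.go lines are kubectl's discovery client at work: before `describe nodes` can run, kubectl fetches the server's API group list, retries a few times internally, and every attempt is refused at the TCP layer. A stdlib-only Go check reproduces the diagnosis, distinguishing "nothing listening" (connection refused, as here) from a hung or unreachable server (timeout):

    // dialCheck reports whether anything is accepting TCP connections on
    // the apiserver address; "connection refused" means no listener at all.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
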
	I1210 07:30:49.022834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:30:49.022834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:30:49.049390    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:30:49.049390    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:30:51.602185    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:51.624195    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:30:51.655195    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.655195    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:30:51.659196    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:30:51.690191    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.690191    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:30:51.693195    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:30:51.725460    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.725460    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:30:51.730438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:30:51.774863    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.774863    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:30:51.778809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:30:51.813602    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.813602    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:30:51.817601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:30:51.847156    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.847156    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:30:51.852015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:30:51.885615    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.885615    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:30:51.888611    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:30:51.917627    1436 logs.go:282] 0 containers: []
	W1210 07:30:51.917627    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:30:51.917627    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:30:51.917627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:30:51.981905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:30:51.981905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:30:52.021010    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:30:52.021010    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:30:52.107693    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:30:52.098932    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:52.100248    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:52.101381    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:52.102308    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:52.103813    3874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: [** stderr ** block omitted; identical to the five discovery errors and "connection refused" message above]
	I1210 07:30:52.107693    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:30:52.107693    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:30:52.136689    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:30:52.136689    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:30:54.692346    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:54.714659    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:30:54.744985    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.744985    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:30:54.748654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:30:54.780867    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.780933    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:30:54.785677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:30:54.816685    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.816685    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:30:54.820345    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:30:54.852521    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.852521    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:30:54.856069    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:30:54.885523    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.885523    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:30:54.888911    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:30:54.920730    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.920791    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:30:54.924848    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:30:54.958352    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.958418    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:30:54.962502    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:30:54.994306    1436 logs.go:282] 0 containers: []
	W1210 07:30:54.994306    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:30:54.994306    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:30:54.994306    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:30:55.055443    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:30:55.055443    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:30:55.099573    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:30:55.099573    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:30:55.188651    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:30:55.177427    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.178364    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.181623    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.182639    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.184133    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:30:55.177427    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.178364    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.181623    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.182639    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:55.184133    4047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:30:55.188651    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:30:55.188651    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:30:55.218586    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:30:55.218613    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
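The "container status" step shells out to sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl, and fall back to plain docker when crictl is absent or fails. A sketch of the same fallback in Go, assuming it runs directly on the node (the log runs it over ssh_runner) and that docker is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Prefer crictl for container status; fall back to docker, mirroring
    // the shell `||` chain in the log line above.
    func main() {
        out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or failing: same fallback as the log's command.
            out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
            if err != nil {
                fmt.Println("both crictl and docker failed:", err)
                return
            }
        }
        fmt.Print(string(out))
    }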
	I1210 07:30:57.773492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:30:57.792491    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:30:57.824520    1436 logs.go:282] 0 containers: []
	W1210 07:30:57.824520    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:30:57.828492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:30:57.860492    1436 logs.go:282] 0 containers: []
	W1210 07:30:57.860492    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:30:57.863497    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:30:57.904719    1436 logs.go:282] 0 containers: []
	W1210 07:30:57.904719    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:30:57.909716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:30:57.943634    1436 logs.go:282] 0 containers: []
	W1210 07:30:57.943634    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:30:57.947607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:30:57.984926    1436 logs.go:282] 0 containers: []
	W1210 07:30:57.984926    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:30:57.987924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:30:58.017939    1436 logs.go:282] 0 containers: []
	W1210 07:30:58.017939    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:30:58.020932    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:30:58.051932    1436 logs.go:282] 0 containers: []
	W1210 07:30:58.051932    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:30:58.054930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:30:58.090517    1436 logs.go:282] 0 containers: []
	W1210 07:30:58.090517    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:30:58.090517    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:30:58.090517    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:30:58.161533    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:30:58.161533    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:30:58.205512    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:30:58.205512    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:30:58.291513    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:30:58.280167    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.280989    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.283371    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.284290    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.286553    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:30:58.280167    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.280989    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.283371    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.284290    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:30:58.286553    4217 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:30:58.291513    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:30:58.291513    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:30:58.319510    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:30:58.319510    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:30:59.055537    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1210 07:30:59.164541    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:30:59.164541    1436 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
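The storage-provisioner apply fails during client-side validation because kubectl cannot download the OpenAPI schema from the dead apiserver; the suggested --validate=false would only skip validation, since the apply itself still needs a reachable server. The "apply failed, will retry" line shows minikube retries the addon callback; a minimal sketch of such a retry loop follows (the backoff schedule and the bare kubectl on PATH are assumptions, standing in for the pinned /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Retry the addon apply a few times with a growing pause; the manifest
    // path is copied from the log, the retry policy is an assumption.
    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        for attempt := 1; attempt <= 5; attempt++ {
            err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                fmt.Println("applied", manifest)
                return
            }
            fmt.Printf("attempt %d failed: %v; retrying\n", attempt, err)
            time.Sleep(time.Duration(attempt) * 2 * time.Second)
        }
        fmt.Println("giving up; apiserver still unreachable")
    }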
	I1210 07:31:00.876565    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:00.904570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:00.946567    1436 logs.go:282] 0 containers: []
	W1210 07:31:00.946567    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:00.951561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:00.995581    1436 logs.go:282] 0 containers: []
	W1210 07:31:00.995581    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:00.999571    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:01.040563    1436 logs.go:282] 0 containers: []
	W1210 07:31:01.040563    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:01.045577    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:01.083566    1436 logs.go:282] 0 containers: []
	W1210 07:31:01.083566    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:01.089574    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:01.139580    1436 logs.go:282] 0 containers: []
	W1210 07:31:01.139580    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:01.143596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:01.182775    1436 logs.go:282] 0 containers: []
	W1210 07:31:01.182775    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:01.186770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:01.220366    1436 logs.go:282] 0 containers: []
	W1210 07:31:01.220366    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:01.224357    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:01.256372    1436 logs.go:282] 0 containers: []
	W1210 07:31:01.256372    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:01.256372    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:01.256372    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:01.322367    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:01.322367    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:01.360370    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:01.360370    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:01.451367    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:01.440198    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.441279    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.442493    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.443955    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.444842    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:01.440198    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.441279    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.442493    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.443955    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:01.444842    4384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:01.451367    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:01.451367    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:01.481367    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:01.482365    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:04.048191    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:04.072435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:04.119705    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.119750    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:04.124241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:04.164987    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.164987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:04.168982    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:04.201990    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.201990    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:04.205978    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:04.234986    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.234986    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:04.237980    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:04.277225    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.277295    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:04.282194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:04.318974    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.318974    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:04.323161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:04.353068    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.353068    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:04.356071    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:04.397751    1436 logs.go:282] 0 containers: []
	W1210 07:31:04.397751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:04.397751    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:04.397751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:04.446292    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:04.447301    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:04.535610    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:04.522529    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.524016    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.525634    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.526560    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.529694    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:04.522529    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.524016    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.525634    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.526560    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:04.529694    4551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:04.535610    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:04.535610    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:04.566823    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:04.566823    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:04.621745    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:04.622268    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:07.189561    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:07.217658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:07.231130    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:31:07.255205    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.255205    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:07.259207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:07.307203    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.307203    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:31:07.307203    1436 logs.go:284] No container was found matching "etcd"
	W1210 07:31:07.307203    1436 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
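The storageclass failure above is interleaved line-by-line with the etcd and coredns enumeration because the addon callback and the log gathering run concurrently and share one log stream; the timestamps overlap rather than the log being corrupted. A minimal sketch of two goroutines producing this kind of line-level interleaving (an assumption about the mechanism, not minikube's actual code):

    package main

    import (
        "log"
        "sync"
    )

    // Two workers writing to one logger: the log package serializes each
    // line, so output interleaves at line granularity, as seen above.
    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go func() {
            defer wg.Done()
            for i := 0; i < 3; i++ {
                log.Println("logs.go: enumerating containers")
            }
        }()
        go func() {
            defer wg.Done()
            for i := 0; i < 3; i++ {
                log.Println("addons.go: applying storageclass")
            }
        }()
        wg.Wait()
    }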
	I1210 07:31:07.310198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:07.342205    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.342205    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:07.345199    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:07.373312    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.373312    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:07.377150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:07.414643    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.414643    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:07.418938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:07.449748    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.449748    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:07.453744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:07.480722    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.480722    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:07.486360    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:07.516941    1436 logs.go:282] 0 containers: []
	W1210 07:31:07.516941    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:07.516941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:07.516941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:07.589917    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:07.589917    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:07.625846    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:07.625846    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:07.715527    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:07.704421    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.705698    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.706414    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.708649    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.709925    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:07.704421    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.705698    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.706414    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.708649    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:07.709925    4731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:07.715527    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:07.715527    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:07.751534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:07.751534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:10.312462    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:10.343642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:10.390416    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.390416    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:10.394945    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:10.427860    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.427860    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:10.431283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:10.462388    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.462388    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:10.466085    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:10.502138    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.502138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:10.505762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:10.546559    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.546595    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:10.550452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:10.588451    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.588451    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:10.593923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:10.628178    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.628235    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:10.632106    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:10.664835    1436 logs.go:282] 0 containers: []
	W1210 07:31:10.664835    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:10.664835    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:10.664835    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:10.706554    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:10.706554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:10.762364    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:10.762364    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:10.845812    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:10.845812    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:10.892099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:10.892099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:11.006866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:10.996537    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:10.997679    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:10.998447    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:11.000711    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:11.001582    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:10.996537    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:10.997679    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:10.998447    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:11.000711    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:11.001582    4926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
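Each diagnostic round enumerates the expected control-plane containers by name filter, and every query returns "0 containers", which is why the describe-nodes step can never succeed. A sketch of that enumeration loop, with the component list and docker ps flags copied from the log lines above (running directly on the node is assumed):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // List docker containers named k8s_<component> for each control-plane
    // component, matching the repeated `docker ps -a --filter ... --format`
    // calls in the log.
    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }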
	I1210 07:31:13.511844    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:13.533956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:13.570796    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.570796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:13.574527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:13.604923    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.604923    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:13.609046    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:13.639223    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.639280    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:13.644033    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:13.680464    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.680464    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:13.683454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:13.710312    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.710312    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:13.716691    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:13.752471    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.752560    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:13.759163    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:13.799571    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.799571    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:13.805230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:13.835026    1436 logs.go:282] 0 containers: []
	W1210 07:31:13.835026    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:13.835026    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:13.835026    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:13.955025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:13.942293    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.944894    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.946216    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.947778    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.948328    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:13.942293    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.944894    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.946216    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.947778    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:13.948328    5063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:13.955025    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:13.955025    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:13.982022    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:13.982022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:14.038040    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:14.038040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:14.099033    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:14.099033    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:16.639877    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:16.668130    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:16.706306    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.706306    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:16.711303    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:16.739302    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.739302    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:16.742317    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:16.774305    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.774305    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:16.778304    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:16.811303    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.811303    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:16.817306    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:16.850312    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.850312    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:16.854316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:16.894315    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.894315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:16.897316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:16.950349    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.950349    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:16.954552    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:16.990903    1436 logs.go:282] 0 containers: []
	W1210 07:31:16.990903    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:16.990903    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:16.990903    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:17.042389    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:17.042389    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:17.108564    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:17.109087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:17.145052    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:17.145052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:17.239440    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:17.225003    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.227844    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.231030    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.232386    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.233997    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:17.225003    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.227844    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.231030    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.232386    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:17.233997    5261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:17.239498    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:17.239498    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:19.773109    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:19.801128    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:19.861106    1436 logs.go:282] 0 containers: []
	W1210 07:31:19.861106    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:19.866107    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:19.904118    1436 logs.go:282] 0 containers: []
	W1210 07:31:19.904118    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:19.909120    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:19.959114    1436 logs.go:282] 0 containers: []
	W1210 07:31:19.959114    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:19.963114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:19.991112    1436 logs.go:282] 0 containers: []
	W1210 07:31:19.991112    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:19.995106    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:20.028115    1436 logs.go:282] 0 containers: []
	W1210 07:31:20.028115    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:20.031108    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:20.065123    1436 logs.go:282] 0 containers: []
	W1210 07:31:20.065123    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:20.068114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:20.100073    1436 logs.go:282] 0 containers: []
	W1210 07:31:20.100073    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:20.103070    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:20.137091    1436 logs.go:282] 0 containers: []
	W1210 07:31:20.138075    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:20.138075    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:20.138075    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:20.200080    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:20.200080    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:20.236070    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:20.236070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:20.263157    1436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1210 07:31:20.367821    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:20.357542    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.358854    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.360152    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.361261    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.362188    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:20.357542    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.358854    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.360152    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.361261    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:20.362188    5415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:20.367821    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:20.367821    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 07:31:20.374805    1436 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:31:20.374805    1436 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
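	Note that the --validate=false hint in the stderr is misleading here: validation only fails because kubectl cannot download the OpenAPI schema from the unreachable apiserver, and with validation disabled the apply would still fail against localhost:8443. A hedged sketch of the retry, with paths taken from the log above and only useful once the apiserver is reachable again:
	# Re-apply two of the dashboard manifests without client-side validation
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml \
	  -f /etc/kubernetes/addons/dashboard-svc.yaml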
	I1210 07:31:20.376817    1436 out.go:179] * Enabled addons: 
	I1210 07:31:20.379812    1436 addons.go:530] duration metric: took 1m38.3806831s for enable addons: enabled=[]
	I1210 07:31:20.396814    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:20.396814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:22.956210    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:22.978218    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:23.011224    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.011224    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:23.015216    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:23.044219    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.044219    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:23.047211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:23.080218    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.080218    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:23.084217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:23.118218    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.118218    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:23.122231    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:23.151215    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.151215    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:23.155213    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:23.185222    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.185222    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:23.189223    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:23.227219    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.227219    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:23.230224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:23.266237    1436 logs.go:282] 0 containers: []
	W1210 07:31:23.266237    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
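	The loop above is minikube's container-presence check: for each control-plane component it lists Docker containers whose names match k8s_<component> and treats an empty result as "not running". A minimal equivalent, assuming a Docker-runtime node:
	# Reproduce the per-component container check from logs.go
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  [ -z "$ids" ] && echo "no container matching ${c}"
	done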
	I1210 07:31:23.266237    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:23.266237    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:23.297214    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:23.297214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:23.357233    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:23.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
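	The cycle above is minikube's apiserver health-wait loop: it checks for a kube-apiserver process with pgrep, probes Docker for each expected control-plane container by name, then gathers kubelet, dmesg, describe-nodes, Docker and container-status logs. A minimal shell sketch of the same container probe, assuming shell access to the node (for example via "minikube ssh"); the component names and the docker ps invocation are copied from the filter arguments in the log lines above:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      # Same query the log runs: list any container (running or exited)
	      # whose name matches the k8s_<component> prefix.
	      ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
	      [ -z "$ids" ] && echo "no container matching k8s_${c}" || echo "k8s_${c}: ${ids}"
	    done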
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
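	Every describe-nodes attempt fails the same way: kubectl dials https://localhost:8443 and the connection is refused, which is consistent with the empty pgrep and docker ps results above (no kube-apiserver process, so nothing is listening on 8443). A hedged way to confirm that reading from the node shell; the pgrep pattern is copied from the log, while the /healthz probe and the availability of curl on the node are assumptions:

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      || echo "kube-apiserver process is not running"
	    # If the process were up, the apiserver would normally answer here:
	    curl -ksS https://localhost:8443/healthz \
	      || echo "nothing accepting connections on localhost:8443"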
	[log condensed: the identical diagnostic cycle repeats at 07:31:52, 07:31:55, 07:31:58, 07:32:01, 07:32:04, 07:32:08, 07:32:11 and 07:32:14. In every pass each "docker ps -a --filter=name=k8s_<component>" query returns 0 containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard, and each "kubectl describe nodes" attempt exits with status 1 after "connection refused" dialing https://localhost:8443.]
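	The timestamps in the condensed span show the wait loop retrying roughly every three seconds. A sketch of an equivalent manual wait, assuming the same node shell; the interval is inferred from the log timestamps, not from minikube's source:

	    # Poll until an apiserver process appears, mirroring the ~3 s cadence above.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done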
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
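The eight `docker ps` calls in each cycle above enumerate one expected control-plane container apiece; an empty result produces the "No container was found matching ..." warning. A condensed, equivalent loop (a sketch for readability, not minikube's actual code):

    # Check each expected k8s_* container name; an empty ID list means
    # the kubelet has not started that component's container yet.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}')
        [ -z "$ids" ] && echo "no container matching \"$c\""
    done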
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
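The "container status" step uses a fallback one-liner: prefer crictl when it is installed, otherwise fall back to plain `docker ps`. The same command as in the log, expanded for readability:

    # `which crictl || echo crictl` yields a runnable path when crictl
    # is installed; if the crictl invocation fails, fall back to docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a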
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
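The kubelet, Docker, and dmesg logs gathered in each cycle are bounded reads, so the collection stays cheap even while polling. The commands, verbatim from the log lines above:

    # Last 400 lines of the kubelet and Docker/cri-docker journal units,
    # plus the most recent kernel warnings and errors.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400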
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
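Each cycle opens with a process-level liveness check for the apiserver. With `-f` the pattern is matched against the full command line, `-x` requires the whole command line to match the regex, and `-n` returns only the newest matching PID; the command is verbatim from the log:

    # Exit status 1 (no output) while no kube-apiserver process whose
    # full command line matches the pattern is running.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'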
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
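The timestamps (07:32:14, :17, :20, ... :45) show this probe repeating on a roughly three-second cadence. A minimal sketch of an equivalent wait loop; the interval and the timeout here are illustrative assumptions, not minikube's actual values:

    # Poll until the apiserver process appears or the deadline passes.
    deadline=$((SECONDS + 300))   # assumed timeout, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; exit 1; }
        sleep 3
    done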
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
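	Every kubectl attempt in this log ends in "dial tcp [::1]:8443: connect: connection refused", meaning nothing is listening on the apiserver port inside the node. As an illustration only (a hypothetical standalone helper, not part of minikube), the same reachability check reduces to a plain TCP dial against the endpoint the errors name:

    // probe_apiserver.go - illustrative sketch; assumes the apiserver
    // endpoint is localhost:8443, as in the kubectl errors above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the failure mode in the log: connect: connection refused.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("something is listening on :8443")
    }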
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
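	The cycle just logged is the unit that repeats below: one pgrep for kube-apiserver, one "docker ps -a --filter=name=k8s_<component>" per control-plane component, then kubelet/dmesg/describe-nodes/Docker/container-status gathering (the backtick expression "which crictl || echo crictl" falls back to plain "docker ps -a" when crictl is absent). A rough Go sketch of the per-component container check (hypothetical, not minikube's actual logs.go code; assumes the docker CLI is on PATH):

    // poll_containers.go - illustrative sketch mirroring the filters
    // visible in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            // An empty list here corresponds to the log's "0 containers: []".
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }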
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
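(Editor's note: each probe cycle above asks Docker for control-plane containers by name. cri-dockerd names pod containers with a "k8s_" prefix, so an empty result for "k8s_kube-apiserver" means the apiserver container was never created. A minimal Go sketch of the same probe, assuming only that the docker CLI is on PATH; minikube's real code, per the ssh_runner.go lines above, runs the identical command over SSH inside the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers whose name matches the k8s_ prefix
    // filter, mirroring the "docker ps -a --filter=name=k8s_..." probes above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty slice -> "0 containers: []"
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

An empty ID list is exactly what the "0 containers: []" lines in the log report.)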
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
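(Editor's note: the "container status" gather uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: try crictl if it resolves, and fall back to docker ps -a when crictl is missing or exits non-zero. A hedged Go equivalent, sketch only, assuming neither CLI is guaranteed to be installed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the fallback chain quoted in the log line above.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(out)
    }

)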
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
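(Editor's note: the "Gathering logs for ..." steps run in a different order on each cycle: kubelet first in one pass, container status or dmesg first in others. That is consistent with iterating over a Go map of log producers, whose iteration order is deliberately randomized; this is an inference from the log ordering, not confirmed from minikube's source. Illustration:

    package main

    import "fmt"

    func main() {
        // Hypothetical producer map; Go randomizes map iteration order,
        // so the print order changes from run to run, as in the cycles above.
        producers := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "Docker":           "journalctl -u docker -u cri-docker -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        for name := range producers {
            fmt.Println("Gathering logs for", name, "...")
        }
    }

)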
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
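(Editor's note: every kubectl attempt fails with "dial tcp [::1]:8443: connect: connection refused", meaning nothing is listening on the apiserver port; this matches the repeated pgrep probes finding no kube-apiserver process. A minimal TCP readiness check reproducing the same failure mode, standard library only:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverListening reports whether anything accepts TCP connections
    // on addr; a refused dial is exactly the error logged above.
    func apiserverListening(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println("apiserver up:", apiserverListening("localhost:8443"))
    }

)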
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
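(Editor's note: the timestamps show the whole probe-and-gather cycle repeating at roughly three-second intervals (07:33:13, :16, :19, ... :41) until an overall timeout expires. A fixed-interval poll loop of that shape, as a sketch; pollUntil is a hypothetical helper, not minikube's actual wait code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pollUntil runs check every interval until it succeeds or timeout expires,
    // matching the ~3s cadence of the cycles above.
    func pollUntil(interval, timeout time.Duration, check func() bool) error {
        deadline := time.Now().Add(timeout)
        for {
            if check() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for apiserver")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := pollUntil(3*time.Second, 12*time.Second, func() bool {
            fmt.Println("probing apiserver ...") // stand-in for the checks above
            return false
        })
        fmt.Println(err)
    }

)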
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
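	The three-second cadence above is minikube's wait loop for the API server: each pass re-runs sudo pgrep -xnf kube-apiserver.*minikube.*, finds no process, regathers the kubelet, dmesg, Docker, and container-status logs, and tries again until a deadline. A minimal sketch of that poll-until-deadline pattern, with invented helper names rather than minikube's actual code:

	// apiserver_wait.go - poll for a kube-apiserver process until a deadline.
	// Sketch only; the command and ~3s cadence mirror the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether pgrep finds a matching process.
	// pgrep exits non-zero when nothing matches, which Run surfaces as an error.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("apiserver is up")
				return
			}
			// The real loop regathers diagnostics here, as the log shows.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}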
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
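	Every cycle also probes for the standard control-plane containers by Docker name filter; the uniform "0 containers" result across all eight names means the runtime never created them, which matches the refused API port. A sketch of that probe, assuming the same k8s_ name-prefix convention the log shows:

	// probe_containers.go - list which k8s_ control-plane containers exist.
	// Sketch of the docker-ps probe seen in the log; helper names are invented.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// containerIDs returns the IDs of containers whose name matches k8s_<name>.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}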
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
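	The recurring dial tcp [::1]:8443: connect: connection refused means the TCP handshake itself was rejected: the host resolved and answered, but nothing is listening on the API server port. A hypothetical stand-alone probe (not part of minikube) that separates "refused" from "timed out":

	// port_probe.go - distinguish a refused port from an unreachable one.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connection refused": host reachable, no listener (the case in this log).
			// "i/o timeout": packets dropped, host or route unreachable.
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8443")
	}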
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
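	The container-status step is a shell fallback chain: which crictl || echo crictl substitutes a bare crictl when the binary is not on PATH, and the trailing || sudo docker ps -a falls back to Docker if crictl fails. Because the backticks are shell command substitution, the whole expression must run through bash -c rather than a direct exec, as in this sketch (command string copied from the log):

	// container_status.go - run the crictl-or-docker fallback from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("both crictl and docker listings failed:", err)
		}
	}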
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
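	The other sources are gathered the same way on each pass: the last 400 journal lines for docker/cri-docker and for kubelet, plus kernel messages filtered to warn severity and above. A sketch that collects the same set on a systemd host with sudo (command strings copied from the log):

	// gather_logs.go - collect the same diagnostics the log-gathering loop does.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"Docker":  `sudo journalctl -u docker -u cri-docker -n 400`,
			"kubelet": `sudo journalctl -u kubelet -n 400`,
			"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		}
		for name, cmd := range sources {
			fmt.Println("==>", name)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
			}
			fmt.Print(string(out))
		}
	}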
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
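
The describe-nodes failure here (and in every later cycle) is a symptom rather than a separate fault: with no kube-apiserver container running, nothing is listening on port 8443 inside the node, so kubectl's TCP dial to localhost:8443 is refused before TLS or authentication are ever reached. A minimal Go sketch of the same reachability probe, assuming only the host and port shown in the log:

// dialcheck.go - confirms whether anything listens where kubectl is pointed.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver bound to the port, this prints e.g.
		// "dial tcp [::1]:8443: connect: connection refused".
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
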
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
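
The cycle above, repeated throughout this run, is minikube's control-plane health probe: pgrep -xnf looks for a running kube-apiserver process (-x exact match, -n newest, -f matching the full command line), each expected k8s_ container is then queried by name through docker ps, and only after every lookup comes back empty does the run fall back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal Go sketch of that polling pattern, assuming hypothetical helper names, an assumed overall timeout, and the roughly 3-second retry interval visible in the timestamps; an illustration, not minikube's actual code:

// probe.go - illustrative sketch of the health-check loop in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The component names match the k8s_ container filters in the log.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs lists container IDs whose names match k8s_<name>.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		missing := 0
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				missing++
			}
		}
		if missing == 0 {
			return // control-plane containers exist; stop probing
		}
		time.Sleep(3 * time.Second) // interval observed between cycles
	}
	fmt.Println("control plane never appeared; gather logs for diagnosis")
}
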
	I1210 07:34:15.052930    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:15.080623    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:15.117403    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.117403    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:15.120370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:15.147363    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.148371    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:15.151363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:15.180365    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.180365    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:15.183366    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:15.215366    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.215366    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:15.218364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:15.247369    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.247369    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:15.251365    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:15.283373    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.283373    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:15.286369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:15.314370    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.314370    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:15.317368    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:15.347380    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.347380    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:15.347380    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:15.347380    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:15.421369    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:15.421369    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:15.458368    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:15.458368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:15.566221    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:15.566279    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:15.566338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:15.605803    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:15.605803    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:18.163754    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:18.197669    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:18.254543    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.254543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:18.260541    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:18.293062    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.293062    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:18.296833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:18.327885    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.327968    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:18.331280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:18.368942    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.368942    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:18.372299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:18.400463    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.400463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:18.405006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:18.446334    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.446379    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:18.449958    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:18.478295    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.478381    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:18.482123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:18.510432    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.510506    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:18.510548    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:18.510548    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:18.572862    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:18.572862    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:18.614127    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:18.614127    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:18.702730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:18.702730    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:18.702730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:18.729639    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:18.729639    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
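
The container-status step above relies on a shell fallback chain: the substitution `which crictl || echo crictl` resolves to crictl's path when it is installed (or to the bare name, which then fails cleanly), and the trailing `|| sudo docker ps -a` covers hosts where crictl is missing or errors out. A minimal Go sketch of running that one-liner the way ssh_runner does, with $(...) standing in for the log's backtick substitution (the two are equivalent in bash):

// status.go - runs the crictl-or-docker fallback from the log via bash -c.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("both crictl and docker listings failed:", err)
	}
}
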
	I1210 07:34:21.289931    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:21.315099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:21.349129    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.349129    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:21.352917    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:21.385897    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.386013    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:21.389207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:21.439847    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.439847    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:21.444868    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:21.473011    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.473011    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:21.476938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:21.503941    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.503983    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:21.507954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:21.536377    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.536377    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:21.540123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:21.571714    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.571714    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:21.575681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:21.605581    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.605581    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:21.605581    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:21.605581    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:21.633565    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:21.633565    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.687271    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:21.687271    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:21.750102    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:21.750102    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:21.792165    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:21.792165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:21.885403    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.393597    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:24.420363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:24.450891    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.450891    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:24.454037    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:24.483407    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.483407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:24.489862    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:24.517830    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.517830    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:24.521711    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:24.549403    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.549403    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:24.553551    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:24.580367    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.580367    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:24.584748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:24.612646    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.612646    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:24.616710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:24.647684    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.647753    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:24.651184    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:24.679053    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.679053    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:24.679053    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:24.679053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:24.768115    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.768115    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:24.768115    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:24.795167    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:24.795201    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:24.844459    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:24.844459    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:24.907171    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:24.907171    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.453205    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:27.478026    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:27.513249    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.513249    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:27.517125    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:27.547733    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.547733    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:27.551680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:27.577736    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.577736    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:27.581469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:27.612483    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.612483    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:27.616434    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:27.644895    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.644895    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:27.650606    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:27.678273    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.678273    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:27.681744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:27.708604    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.708604    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:27.712244    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:27.742726    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.742726    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:27.742726    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:27.742726    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:27.807570    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:27.807570    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.846722    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:27.846722    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:27.929641    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:27.929641    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:27.929641    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:27.956087    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:27.956087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.506646    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:30.530148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:30.563444    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.563444    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:30.567219    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:30.596843    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.596843    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:30.600803    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:30.628947    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.628947    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:30.632665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:30.663325    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.663369    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:30.667341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:30.695640    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.695640    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:30.699545    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:30.728310    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.728310    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:30.731899    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:30.758598    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.758598    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:30.763285    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:30.792051    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.792051    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:30.792051    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:30.792051    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:30.830219    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:30.830219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:30.919635    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:30.919635    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:30.919635    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:30.949360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:30.949360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.997435    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:30.997435    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.565782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:33.590543    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:33.623936    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.623936    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:33.629607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:33.664589    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.664673    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:33.668215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:33.698892    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.698892    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:33.702344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:33.733428    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.733428    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:33.737226    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:33.764873    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.764873    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:33.768422    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:33.800350    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.800350    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:33.804811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:33.836711    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.836711    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:33.840164    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:33.869248    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.869333    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:33.869333    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:33.869333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.932626    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:33.933627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:33.974227    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:33.974227    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:34.066031    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:34.066031    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:34.066031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:34.092765    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:34.092765    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:36.652871    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:36.677531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:36.712608    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.712608    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:36.718832    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:36.748298    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.748298    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:36.751762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:36.783390    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.783403    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:36.787051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:36.815730    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.815766    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:36.819100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:36.848875    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.848875    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:36.852925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:36.886657    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.886657    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:36.890808    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:36.920858    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.920858    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:36.924583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:36.955882    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.955960    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:36.956001    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:36.956001    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:37.021848    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:37.021848    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:37.060744    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:37.060744    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:37.154895    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:37.154895    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:37.154895    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:37.182385    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:37.182385    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:39.737032    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:39.762115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:39.792900    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.792900    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:39.797014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:39.825423    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.825455    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:39.829352    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:39.856679    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.856679    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:39.860615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:39.891351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.891351    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:39.895346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:39.924351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.924351    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:39.928531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:39.956447    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.956447    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:39.961810    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:39.987792    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.987792    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:39.991127    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:40.018614    1436 logs.go:282] 0 containers: []
	W1210 07:34:40.018614    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:40.018614    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:40.018614    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:40.082378    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:40.082378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:40.123506    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:40.123506    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:40.208266    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:40.209272    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:40.209272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:40.239017    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:40.239017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:42.793527    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:42.818084    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:42.852095    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.852095    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:42.855685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:42.883269    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.883269    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:42.887287    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:42.918719    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.918800    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:42.923828    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:42.950663    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.950663    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:42.956319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:42.985991    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.985991    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:42.989729    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:43.017767    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.017824    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:43.021689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:43.048180    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.048180    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:43.052257    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:43.081092    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.081160    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:43.081183    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:43.081217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:43.174944    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:43.174992    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:43.174992    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:43.202288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:43.202807    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:43.249217    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:43.249217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:43.311267    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:43.311267    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
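	Each retry of the sequence above first probes the node for the standard control-plane containers before re-gathering logs. A minimal shell sketch of that probe, reconstructed only from the Run: lines recorded in this log (component names and filters as logged; this is not minikube's actual source):

	for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                 kube-controller-manager kindnet kubernetes-dashboard; do
	  # One name-filtered query per component, as in the probes above.
	  ids=$(docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}')
	  # An empty result corresponds to the "No container was found matching" warnings.
	  [ -z "$ids" ] && echo "No container was found matching \"${component}\"" >&2
	done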
	I1210 07:34:45.857003    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:45.881743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:45.911856    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.911856    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:45.915335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:45.945613    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.945613    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:45.949134    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:45.977768    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.977768    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:45.982182    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:46.010859    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.010859    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:46.014603    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:46.043489    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.043531    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:46.047198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:46.080651    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.080685    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:46.084319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:46.116705    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.116780    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:46.121508    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:46.154299    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.154299    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:46.154299    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:46.154299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:46.222546    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:46.222546    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:46.262468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:46.262468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:46.349894    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:46.349894    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:46.349894    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:46.376804    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:46.376804    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:48.931982    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:48.957769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:48.990182    1436 logs.go:282] 0 containers: []
	W1210 07:34:48.990182    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:48.994255    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:49.021913    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.021913    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:49.026344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:49.054704    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.054704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:49.058471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:49.089507    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.089559    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:49.093804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:49.121462    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.121462    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:49.125755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:49.156174    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.156174    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:49.160707    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:49.190933    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.190933    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:49.194771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:49.220610    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.220610    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:49.220610    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:49.220610    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:49.283897    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:49.283897    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:49.324154    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:49.324154    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:49.412165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:49.412165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:49.413146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:49.440045    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:49.440045    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.013495    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:52.044149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:52.080205    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.080205    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:52.084762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:52.115105    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.115105    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:52.119720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:52.149672    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.149672    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:52.153985    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:52.186711    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.186711    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:52.192181    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:52.217751    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.217751    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:52.221590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:52.250827    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.250876    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:52.254668    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:52.284643    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.284643    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:52.288811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:52.316628    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.316707    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:52.316707    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:52.316707    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:52.348325    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.348325    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.408110    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.408110    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:52.471268    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:52.471268    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:52.511512    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:52.511512    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.594976    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:55.100294    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:55.126530    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:55.160945    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.160945    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:55.164755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:55.196407    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.196407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:55.199994    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:55.229174    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.229174    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:55.232898    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:55.265856    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.265856    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:55.268892    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:55.302098    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.302121    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:55.305590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:55.335754    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.335754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:55.339583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:55.368170    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.368251    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:55.372008    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:55.397576    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.397576    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:55.397576    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:55.397576    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:55.434345    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:55.434345    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:55.528958    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:55.528958    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:55.528958    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:55.555805    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:55.555805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:55.602232    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:55.602232    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:58.169858    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:58.195497    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:58.226557    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.226588    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:58.229677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:58.260817    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.260817    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:58.265378    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:58.293848    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.293920    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:58.297406    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:58.326737    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.326737    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:58.330307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:58.357319    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.357407    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:58.360727    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:58.392361    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.392405    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:58.395697    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:58.425728    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.425807    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:58.429369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:58.457816    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.457866    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:58.457866    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:58.457866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:58.495777    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:58.495777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:58.585489    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:58.585489    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:58.585489    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:58.613007    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:58.613007    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:58.661382    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:58.661382    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.230900    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:01.255356    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:01.292137    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.292190    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:01.297192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:01.328372    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.328372    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:01.332239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:01.360635    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.360635    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:01.364529    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:01.391175    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.391175    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:01.394754    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:01.423093    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.423093    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:01.427022    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:01.454965    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.454965    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:01.459137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:01.487734    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.487734    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:01.492051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:01.518150    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.518150    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:01.518150    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:01.518150    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.580940    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:01.580940    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:01.620363    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:01.620363    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:01.710696    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:01.710696    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:01.710696    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:01.736867    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:01.736867    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.295439    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:04.322348    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:04.356895    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.356919    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:04.361858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:04.396943    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.397019    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:04.401065    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:04.431929    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.431929    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:04.436798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:04.468073    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.468073    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:04.472528    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:04.503230    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.503230    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:04.506632    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:04.540016    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.540016    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:04.543627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:04.576446    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.576446    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:04.583292    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:04.611475    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.611542    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:04.611542    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:04.611542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:04.640376    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:04.640433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.695309    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:04.695309    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:04.756418    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:04.756418    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:04.795089    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:04.795089    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:04.891481    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:04.878108   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.880090   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.883096   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.885167   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.886541   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.396688    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:07.422837    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:07.454807    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.454807    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:07.459071    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:07.489720    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.489720    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:07.493466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:07.519982    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.519982    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:07.523858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:07.552985    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.552985    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:07.556972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:07.589709    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.589709    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:07.593709    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:07.621519    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.621519    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:07.625151    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:07.654324    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.654404    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:07.657279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:07.690913    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.690966    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:07.690988    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:07.690988    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:07.757157    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:07.757157    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:07.796333    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:07.796333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:07.893954    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:07.881331   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.882766   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.885657   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887077   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887623   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.893954    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:07.893954    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:07.943452    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:07.943452    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.496562    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:10.522517    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:10.555517    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.555517    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:10.560160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:10.591257    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.591306    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:10.594925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:10.623075    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.623075    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:10.626725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:10.654115    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.654115    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:10.658014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:10.689683    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.689683    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:10.693386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:10.721754    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.721754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:10.725087    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:10.753052    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.753052    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:10.756926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:10.787466    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.787466    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:10.787466    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:10.787466    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:10.882563    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:10.873740   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.874902   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.876114   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.877091   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.878349   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:10.882563    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:10.882563    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:10.944299    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:10.944299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.993835    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:10.993835    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:11.053114    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:11.053114    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
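
The eight docker ps probes in each iteration differ only in the name filter. Consolidated into one loop, a sketch using exactly the filters the log shows (assumes the docker CLI inside the node):

    # check for each control-plane / addon container by its k8s_ name prefix
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${c} =="
      docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'
    done

Empty output for every component, repeated across iterations, means no control-plane container was ever created, so the evidence has to come from below Kubernetes: the kubelet and Docker logs gathered next.
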
	I1210 07:35:13.597304    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:13.621417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:13.653723    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.653842    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:13.657020    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:13.690175    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.690175    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:13.693954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:13.723350    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.723350    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:13.728514    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:13.757179    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.757179    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:13.765645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:13.794387    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.794473    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:13.798130    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:13.826937    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.826937    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:13.830895    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:13.865171    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.865171    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:13.869540    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:13.899920    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.899920    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:13.899920    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:13.899920    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:13.964338    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:13.964338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:14.028584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:14.028584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:14.067840    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:14.067840    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:14.154123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:14.144490   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.145615   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.146725   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.148037   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.149069   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:14.154123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:14.154123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:16.685726    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:16.716822    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:16.753764    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.753827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:16.757211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:16.789634    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.789634    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:16.793640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:16.822677    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.822728    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:16.826522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:16.853660    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.853660    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:16.858461    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:16.887452    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.887504    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:16.893014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:16.939344    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.939344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:16.943118    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:16.971703    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.971781    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:16.974884    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:17.003517    1436 logs.go:282] 0 containers: []
	W1210 07:35:17.003595    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:17.003595    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:17.003595    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:17.088355    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:17.079526   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.080729   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.081812   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.083165   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.084419   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:17.088355    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:17.088355    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:17.117181    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:17.117241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:17.168070    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:17.168155    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:17.231584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:17.231584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
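
The four "Gathering logs for ..." steps map onto four shell commands, all verbatim in the Run: lines. To collect the same evidence by hand (a sketch; assumes minikube ssh access, and crictl may be missing, which is why the log itself falls back to docker ps -a):

    sudo journalctl -u docker -u cri-docker -n 400                             # Docker / cri-dockerd units
    sudo journalctl -u kubelet -n 400                                          # kubelet unit
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status, with docker fallback

Of these, the kubelet journal is the most likely place to explain why no static-pod containers were started.
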
	I1210 07:35:19.776112    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:19.801640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:19.835886    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.835886    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:19.839626    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:19.872127    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.872127    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:19.876526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:19.929339    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.929339    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:19.933522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:19.962400    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.962400    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:19.966133    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:19.994468    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.994544    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:19.998645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:20.027252    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.027252    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:20.032575    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:20.060153    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.060153    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:20.065171    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:20.091891    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.091891    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:20.091891    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:20.091891    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:20.131103    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:20.131103    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:20.218614    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:20.208033   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.209212   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.210215   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214139   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214965   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:20.218614    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:20.219146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:20.245788    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:20.245788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:20.298111    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:20.298207    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:22.861878    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:22.887649    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:22.922573    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.922573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:22.926179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:22.959170    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.959197    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:22.963338    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:22.994510    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.994566    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:22.997861    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:23.029960    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.030036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:23.033513    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:23.064625    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.064625    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:23.069769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:23.101906    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.101943    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:23.105651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:23.136615    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.136615    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:23.140616    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:23.170857    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.170942    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:23.170942    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:23.170942    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:23.233098    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:23.233098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:23.273238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:23.273238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:23.361638    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:23.352696   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.354050   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.356707   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.357782   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.358807   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:23.361638    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:23.361638    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:23.390711    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:23.391230    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:25.949809    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:25.975470    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:26.007496    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.007496    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:26.011469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:26.044617    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.044617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:26.048311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:26.078756    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.078783    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:26.082359    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:26.112113    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.112183    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:26.115713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:26.148097    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.148097    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:26.151926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:26.182729    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.182753    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:26.186743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:26.217219    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.217219    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:26.223773    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:26.251643    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.251713    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:26.251713    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:26.251713    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:26.278698    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:26.278698    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:26.332014    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:26.332014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:26.394304    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:26.394304    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:26.433073    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:26.433073    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:26.519395    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:26.506069   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.507354   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.509591   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.512516   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.514125   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
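
The describe-nodes probe is a single kubectl invocation against the in-node kubeconfig, verbatim from the Run: line:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig

"dial tcp [::1]:8443: connect: connection refused" is a TCP-level refusal, not an auth or TLS failure: the kubeconfig points at localhost:8443, no socket is bound there, and so every retry fails immediately and identically.
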
	I1210 07:35:29.024398    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:29.049372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:29.084989    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.085019    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:29.089078    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:29.116420    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.116420    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:29.120531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:29.149880    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.149880    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:29.153505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:29.181726    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.181790    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:29.185295    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:29.216713    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.216713    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:29.222568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:29.249487    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.249487    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:29.253512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:29.283473    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.283497    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:29.287061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:29.313225    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.313225    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:29.313225    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:29.313225    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:29.399665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:29.399665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:29.399665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:29.428593    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:29.428593    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:29.477815    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:29.477877    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:29.541874    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:29.541874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
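
Each iteration starts roughly three seconds after the previous one (07:35:10, :13, :16, :19, :22, :25, :29, ...), with identical results. A minimal sketch for watching whether the apiserver ever comes up, assuming minikube ssh access to this profile; the pattern is the one the log uses, with quoting added:

    # poll until an apiserver process appears inside the node
    while ! minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done
    echo "apiserver process is up"

In this run it never appears; the cycle repeats unchanged through the rest of the log.
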
	I1210 07:35:32.087876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:32.113456    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:32.145773    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.145805    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:32.149787    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:32.178912    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.178987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:32.182700    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:32.213301    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.213301    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:32.217129    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:32.246756    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.246824    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:32.250299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:32.278791    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.278835    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:32.282397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:32.316208    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.316278    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:32.320233    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:32.349155    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.349155    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:32.352807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:32.386875    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.386875    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:32.386944    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:32.386944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:32.479781    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:32.479781    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:32.479781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:32.506994    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:32.506994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:32.561757    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:32.561757    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:32.624545    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:32.624545    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.176040    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:35.201056    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:35.235735    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.235735    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:35.239655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:35.267349    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.267416    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:35.270515    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:35.303264    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.303264    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:35.306371    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:35.339037    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.339263    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:35.343297    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:35.375639    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.375639    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:35.379647    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:35.407670    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.407670    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:35.411506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:35.446240    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.446240    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:35.450265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:35.477814    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.477814    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:35.477814    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:35.477814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:35.541174    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:35.541174    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.581633    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:35.581633    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:35.673254    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:35.673254    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:35.673254    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:35.701200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:35.701200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:38.255869    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:38.281759    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:38.316123    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.316123    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:38.319358    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:38.348903    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.348943    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:38.352900    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:38.381759    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.381795    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:38.385361    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:38.414524    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.414586    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:38.417710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:38.447131    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.447205    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:38.451100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:38.479508    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.479543    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:38.483003    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:38.512848    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.512848    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:38.516967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:38.547680    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.547680    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:38.547680    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:38.547680    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:38.614038    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:38.614038    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:38.658448    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:38.658448    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:38.743054    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	 output: 
	** stderr ** 
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:38.743054    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:38.743054    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:38.775152    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:38.775214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:41.333835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:41.358081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:41.393471    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.393471    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:41.396774    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:41.425173    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.425224    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:41.428523    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:41.456663    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.456663    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:41.459654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:41.490212    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.490212    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:41.493250    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:41.523505    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.523505    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:41.527006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:41.555529    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.555529    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:41.559605    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:41.590913    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.591011    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:41.596392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:41.627361    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.627421    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:41.627441    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:41.627538    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:41.692948    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:41.692948    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:41.731909    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:41.731909    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:41.816121    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:41.816121    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:41.816121    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:41.844622    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:41.844622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:44.401865    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:44.426294    1436 out.go:203] 
	W1210 07:35:44.428631    1436 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:35:44.428631    1436 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:35:44.428631    1436 out.go:285] * Related issues:
	* Related issues:
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:35:44.430629    1436 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-525200 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-rc.1": exit status 105
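The K8S_APISERVER_MISSING exit above is the terminal state of the probe loop recorded in the log: a few seconds apart, minikube runs pgrep for an apiserver process and docker ps -a for a k8s_kube-apiserver container inside the node, and after the 6m0s wait neither ever appeared. As a minimal sketch (assuming the profile from the failing command is still present), the same probes can be re-run by hand over minikube ssh; the inner commands are the ones the log itself records:

	# does an apiserver process exist inside the node?
	minikube ssh -p newest-cni-525200 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# did the container runtime ever create the apiserver container?
	minikube ssh -p newest-cni-525200 -- "docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'"
	# kubelet logs usually explain why the apiserver static pod never started
	minikube ssh -p newest-cni-525200 -- sudo journalctl -u kubelet -n 100 --no-pager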
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-525200
helpers_test.go:244: (dbg) docker inspect newest-cni-525200:

-- stdout --
	[
	    {
	        "Id": "6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188",
	        "Created": "2025-12-10T07:18:58.277037255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:29:29.73179662Z",
	            "FinishedAt": "2025-12-10T07:29:26.920141661Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hostname",
	        "HostsPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hosts",
	        "LogPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188-json.log",
	        "Name": "/newest-cni-525200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-525200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-525200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-525200",
	                "Source": "/var/lib/docker/volumes/newest-cni-525200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-525200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-525200",
	                "name.minikube.sigs.k8s.io": "newest-cni-525200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6405ded628bc55282f5002d4bd683ef72ad68a142c14324a7fe852f16eb1d8f",
	            "SandboxKey": "/var/run/docker/netns/c6405ded628b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57762"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57764"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-525200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e73cdc5fd1be9396722947f498060ee7b5757251a78043b99e30abfea0ec658b",
	                    "EndpointID": "bf76bc1596f8833f7b9c83f8bb2261128b3871775b4118fe4c99fcdac5e453d3",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-525200",
	                        "6b7f9063cbda"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
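The inspect dump above is the complete record; for triage, the handful of fields the post-mortem actually uses (container state, restart count, the host port published for the apiserver's 8443/tcp, the node IP) can be pulled directly with docker inspect -f and a Go template. A sketch against this run's container name, with field paths matching the JSON above:

	# container state and restart count ("running", restarts=0 in this run)
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' newest-cni-525200
	# host port mapped to the apiserver's 8443/tcp (57764 above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-525200
	# node IP on the profile network (192.168.121.2 above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-525200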
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (607.7666ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
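Exit status 2 with "Running" on stdout is consistent with the failure mode: the kic container is up while the cluster inside it is not, and the harness only templates .Host. Assuming the other standard status fields (Kubelet, APIServer, Kubeconfig), a sketch that would also surface the broken component:

	# host should read Running; apiserver is the field expected to be Stopped here
	minikube status -p newest-cni-525200 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'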
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25: (1.4707453s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/kube-flannel/cni-conf.json                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status kubelet --all --full --no-pager         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat kubelet --no-pager                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat docker --no-pager                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo docker system info                                       │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cri-dockerd --version                                    │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo containerd config dump                                   │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat crio --no-pager                            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo crio config                                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ delete  │ -p custom-flannel-648600                                                               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:31:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.483636    2240 out.go:374] Setting ErrFile to fd 1148...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.498633    2240 out.go:368] Setting JSON to false
	I1210 07:31:27.500624    2240 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10819,"bootTime":1765341068,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:31:27.500624    2240 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:31:27.505874    2240 out.go:179] * [custom-flannel-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:31:27.510785    2240 notify.go:221] Checking for updates...
	I1210 07:31:27.513604    2240 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:31:27.516776    2240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:31:27.521423    2240 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:31:27.524646    2240 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:31:27.526628    2240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:27.530138    2240 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:27.530637    2240 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.530927    2240 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.531072    2240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:31:27.674116    2240 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:31:27.679999    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:27.935225    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:27.906881904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:27.940210    2240 out.go:179] * Using the docker driver based on user configuration
	I1210 07:31:27.947210    2240 start.go:309] selected driver: docker
	I1210 07:31:27.947210    2240 start.go:927] validating driver "docker" against <nil>
	I1210 07:31:27.947210    2240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:31:28.038927    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:28.306393    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:28.276193336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:28.307456    2240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:31:28.308474    2240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:31:28.311999    2240 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:31:28.314563    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:31:28.314921    2240 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 07:31:28.314921    2240 start.go:353] cluster config:
	{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:31:28.317704    2240 out.go:179] * Starting "custom-flannel-648600" primary control-plane node in "custom-flannel-648600" cluster
	I1210 07:31:28.318967    2240 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:31:28.320981    2240 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:28.323967    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:28.323967    2240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:31:28.370604    2240 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.410253    2240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:31:28.410253    2240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:31:28.586590    2240 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.586590    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:28.586590    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json: {Name:mk37135597d0b3e0094e1cb1b5ff50d942db06b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:28.587928    2240 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:31:28.587928    2240 start.go:360] acquireMachinesLock for custom-flannel-648600: {Name:mk4a3a34c58cff29c46217d57a91ed79fc9f522b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:28.588459    2240 start.go:364] duration metric: took 531.3µs to acquireMachinesLock for "custom-flannel-648600"
	I1210 07:31:28.588615    2240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:31:28.588742    2240 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:31:28.592548    2240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:31:28.593172    2240 start.go:159] libmachine.API.Create for "custom-flannel-648600" (driver="docker")
	I1210 07:31:28.593172    2240 client.go:173] LocalClient.Create starting
	I1210 07:31:28.593172    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.601656    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:31:28.702719    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:31:28.710721    2240 network_create.go:284] running [docker network inspect custom-flannel-648600] to gather additional debugging logs...
	I1210 07:31:28.710721    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600
	W1210 07:31:28.938963    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 returned with exit code 1
	I1210 07:31:28.938963    2240 network_create.go:287] error running [docker network inspect custom-flannel-648600]: docker network inspect custom-flannel-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-648600 not found
	I1210 07:31:28.938963    2240 network_create.go:289] output of [docker network inspect custom-flannel-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-648600 not found
	
	** /stderr **
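
The exchange above is a probe, not a failure: minikube inspects the network first, and the daemon's "network ... not found" on stderr is the expected miss that triggers creation. A sketch of the same check (not minikube's code; string-matching stderr is a simplification of what network_create.go does with the captured output):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // networkExists inspects a docker network and treats the daemon's
    // "not found" as a clean miss, any other failure as a real error.
    func networkExists(name string) (bool, error) {
    	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    	if err == nil {
    		return true, nil
    	}
    	if strings.Contains(string(out), "not found") {
    		return false, nil // expected miss: the caller creates the network next
    	}
    	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
    }

    func main() {
    	ok, err := networkExists("bridge")
    	fmt.Println(ok, err)
    }
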
	I1210 07:31:28.945949    2240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:31:29.091971    2240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.381586    2240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.465291    2240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a8ae0}
	I1210 07:31:29.465291    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:31:29.470056    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.046347    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.046347    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.046347    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:31:30.140283    2240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.262644    2240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e1d40}
	I1210 07:31:30.262866    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:31:30.267646    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.581811    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.581811    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.581811    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.76.0/24, will retry: subnet is taken
	I1210 07:31:30.621040    2240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.648052    2240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cde450}
	I1210 07:31:30.648052    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:31:30.656045    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	I1210 07:31:30.870907    2240 network_create.go:108] docker network custom-flannel-648600 192.168.85.0/24 created
	I1210 07:31:30.870907    2240 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-648600" container
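
The three attempts above (…67.0, …76.0, then …85.0) show the free-subnet scan: candidate private /24 blocks are tried in steps of 9 in the third octet, and "Pool overlaps" from the daemon means advance and retry; once a network is created, the node is pinned to the static .2 host address (the .1 gateway being reserved). A compressed sketch of that loop, with the step size taken from the log and overlap detection simplified to "any non-zero exit":

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tryCreateNetwork walks candidate 192.168.x.0/24 blocks in steps of 9
    // (49, 58, 67, 76, 85, ...) and retries while Docker rejects the pool.
    func tryCreateNetwork(name string) (string, error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		err := exec.Command("docker", "network", "create", "--driver=bridge",
    			"--subnet="+subnet, "--gateway="+gateway, name).Run()
    		if err == nil {
    			return subnet, nil // the node then gets the static .2 address
    		}
    		// assume the failure was "Pool overlaps ..." and try the next block
    	}
    	return "", fmt.Errorf("no free /24 found for %s", name)
    }

    func main() {
    	fmt.Println(tryCreateNetwork("demo-net"))
    }
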
	I1210 07:31:30.881906    2240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:31:31.006456    2240 cli_runner.go:164] Run: docker volume create custom-flannel-648600 --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:31:31.098467    2240 oci.go:103] Successfully created a docker volume custom-flannel-648600
	I1210 07:31:31.104469    2240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
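
The "preload sidecar" run above does two jobs at once: the first mount of the freshly created, empty named volume at /var makes Docker copy the image's /var contents into it (standard named-volume seeding behavior), and the /usr/bin/test entrypoint with "-d /var/lib" makes the container's exit code assert that the copy produced the expected directory. A sketch of the same trick, with a placeholder volume name and a stock image standing in for kicbase:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// "demo-vol" and ubuntu:24.04 are placeholders; the log uses the
    	// cluster name as the volume and the kicbase image.
    	out, err := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/test",
    		"-v", "demo-vol:/var", // first mount of an empty named volume seeds it from the image's /var
    		"ubuntu:24.04",
    		"-d", "/var/lib", // exits 0 only if the seeded /var/lib directory exists
    	).CombinedOutput()
    	fmt.Printf("%s err=%v\n", out, err)
    }
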
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2058554s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:31:31.792496    2240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.2053301s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:31:31.794500    2240 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.794500    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:31:31.794500    2240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2078599s
	I1210 07:31:31.795487    2240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:31:31.796493    2240 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.796493    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:31:31.796493    2240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.2098526s
	I1210 07:31:31.796493    2240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:31:31.809204    2240 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.809204    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:31:31.809204    2240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2225634s
	I1210 07:31:31.809728    2240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:31:31.821783    2240 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.822582    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:31:31.822582    2240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.2354164s
	I1210 07:31:31.822582    2240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:31:31.828690    2240 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.828690    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:31:31.828690    2240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.2420491s
	I1210 07:31:31.828690    2240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:31:31.868175    2240 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.869189    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:31:31.869189    2240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.2820228s
	I1210 07:31:31.869189    2240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:31:31.869189    2240 cache.go:87] Successfully saved all images to host disk.
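
The cache.go block above is a per-image lock-then-stat cycle: each image takes its own named lock, the tarball path (shown in Windows' extended-length \\?\Volume{GUID} form) is checked for existence, and a hit is reported as "exists ... succeeded" without re-downloading. A minimal sketch of that check; the mutex stands in for minikube's named file locks:

    package main

    import (
    	"fmt"
    	"os"
    	"sync"
    )

    var cacheMu sync.Mutex // stand-in for the per-image file lock in the log

    // ensureCached reports a cache hit when the tarball is already on disk,
    // mirroring the "cache image ... exists ... succeeded" lines above.
    func ensureCached(tarPath string) (hit bool, err error) {
    	cacheMu.Lock()
    	defer cacheMu.Unlock()
    	if _, err := os.Stat(tarPath); err == nil {
    		return true, nil
    	} else if !os.IsNotExist(err) {
    		return false, err
    	}
    	return false, nil // miss: the caller would download and save the tar here
    }

    func main() {
    	hit, err := ensureCached(`/tmp/storage-provisioner_v5`)
    	fmt.Println(hit, err)
    }
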
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
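
The container-status command above uses two layers of fallback: the backquoted `which crictl || echo crictl` resolves crictl's absolute path when it is installed (sudo's restricted PATH may otherwise miss it), and the outer `|| sudo docker ps -a` falls back to the Docker CLI when crictl is absent or fails. The same shape in Go (a sketch, not minikube's runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, mirroring
    // `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
    			return out, nil
    		}
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := containerStatus()
    	fmt.Printf("%s err=%v\n", out, err)
    }
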
	I1210 07:31:32.772569    2240 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6680738s)
	I1210 07:31:32.772569    2240 oci.go:107] Successfully prepared a docker volume custom-flannel-648600
	I1210 07:31:32.772569    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:32.777565    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:33.023291    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:33.001747684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:33.027286    2240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:31:33.264619    2240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-648600 --name custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-648600 --network custom-flannel-648600 --ip 192.168.85.2 --volume custom-flannel-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
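
Each --publish=127.0.0.1::<port> in the run command above binds a random loopback host port for that container port; the chosen port is recovered later by templating the container's inspect data, which is exactly what the subsequent `docker container inspect -f ... "22/tcp" ...` lines do (yielding 58200 here). A sketch of that lookup:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPortFor asks Docker which host port was bound for a container port,
    // using the same Go template as the inspect calls in the log.
    func hostPortFor(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPortFor("custom-flannel-648600", "22/tcp")
    	fmt.Println(p, err)
    }
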
	I1210 07:31:34.003194    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Running}}
	I1210 07:31:34.069196    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.137196    2240 cli_runner.go:164] Run: docker exec custom-flannel-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:31:34.255530    2240 oci.go:144] the created container "custom-flannel-648600" has a running status.
	I1210 07:31:34.255530    2240 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:34.371827    2240 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:31:34.454671    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.514682    2240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:31:34.514682    2240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:31:34.665673    2240 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:37.044619    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:37.095607    2240 machine.go:94] provisionDockerMachine start ...
	I1210 07:31:37.098607    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.155601    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.171620    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.171620    2240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:31:37.347331    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.347331    2240 ubuntu.go:182] provisioning hostname "custom-flannel-648600"
	I1210 07:31:37.350327    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.408671    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.409222    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.409222    2240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname
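
The hostname command above is a two-step provision: `sudo hostname X` changes the running kernel's hostname immediately, and `echo X | sudo tee /etc/hostname` persists it for subsequent boots of the node. The same idea natively in Go (a sketch; needs root and is Linux-only):

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    // setHostname mirrors the SSH command above: set the live hostname,
    // then persist it so the node keeps its name across restarts.
    func setHostname(name string) error {
    	if err := syscall.Sethostname([]byte(name)); err != nil {
    		return err
    	}
    	return os.WriteFile("/etc/hostname", []byte(name+"\n"), 0o644)
    }

    func main() {
    	fmt.Println(setHostname("custom-flannel-648600"))
    }
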
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:37.617301    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.621329    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.680493    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.681514    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.681514    2240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:31:37.850452    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
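
The /etc/hosts command above is deliberately idempotent: if some line already ends with the hostname nothing happens; otherwise an existing 127.0.1.1 entry is rewritten in place, and only as a last resort is a new line appended (hence the empty output here on a fresh node, where sed matched nothing and tee printed nothing back). The same logic over an in-memory hosts file (a sketch; whitespace handling is simplified relative to the grep/sed patterns):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry reproduces the shell above: keep the file unchanged if
    // the name is already mapped, else rewrite the 127.0.1.1 line, else append.
    func ensureHostsEntry(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	for _, l := range lines {
    		t := strings.TrimRight(l, " \t")
    		if strings.HasSuffix(t, " "+name) || strings.HasSuffix(t, "\t"+name) {
    			return hosts // already present
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name // rewrite in place, like the sed branch
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "127.0.1.1 " + name + "\n" // append, like the tee -a branch
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "custom-flannel-648600"))
    }
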
	I1210 07:31:37.850452    2240 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:31:37.850452    2240 ubuntu.go:190] setting up certificates
	I1210 07:31:37.850452    2240 provision.go:84] configureAuth start
	I1210 07:31:37.855263    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:37.926854    2240 provision.go:143] copyHostCerts
	I1210 07:31:37.927569    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:31:37.927608    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:31:37.928059    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:31:37.928961    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:31:37.928961    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:31:37.928961    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:31:37.930358    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:31:37.930390    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:31:37.930744    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:31:37.931754    2240 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-648600 san=[127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube]
	I1210 07:31:38.038131    2240 provision.go:177] copyRemoteCerts
	I1210 07:31:38.042277    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:31:38.045314    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.098793    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:38.243502    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:31:38.284050    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1210 07:31:38.320436    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:31:38.351829    2240 provision.go:87] duration metric: took 501.3694ms to configureAuth
	I1210 07:31:38.351829    2240 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:31:38.352840    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:38.355824    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.405824    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.405824    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.405824    2240 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:31:38.582107    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:31:38.582107    2240 ubuntu.go:71] root file system type: overlay
	I1210 07:31:38.582107    2240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:31:38.585874    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.646407    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.646407    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.646407    2240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:31:38.847766    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:31:38.852241    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.938899    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.938899    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.938899    2240 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:31:40.711527    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:31:38.832035101 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
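	The provisioning command above is idempotent by construction: the candidate file is written first, and the move/daemon-reload/restart branch runs only when diff reports a difference. The same pattern in isolation (variable names are illustrative):
	
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# diff exits non-zero only when the files differ, so an unchanged
	# config never triggers a docker restart.
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload &&
	  sudo systemctl enable docker &&
	  sudo systemctl restart docker
	}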
	
	I1210 07:31:40.711665    2240 machine.go:97] duration metric: took 3.616002s to provisionDockerMachine
	I1210 07:31:40.711665    2240 client.go:176] duration metric: took 12.1183047s to LocalClient.Create
	I1210 07:31:40.711665    2240 start.go:167] duration metric: took 12.1183047s to libmachine.API.Create "custom-flannel-648600"
	I1210 07:31:40.711665    2240 start.go:293] postStartSetup for "custom-flannel-648600" (driver="docker")
	I1210 07:31:40.711665    2240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:31:40.715645    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:31:40.718723    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:40.776513    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:40.917451    2240 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:31:40.923444    2240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:31:40.923444    2240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:31:40.923444    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:31:40.929458    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:31:40.942452    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:31:40.977491    2240 start.go:296] duration metric: took 265.8211ms for postStartSetup
	I1210 07:31:40.981481    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.034489    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:41.039496    2240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:31:41.043532    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.111672    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.255080    2240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:31:41.269938    2240 start.go:128] duration metric: took 12.6809984s to createHost
	I1210 07:31:41.269938    2240 start.go:83] releasing machines lock for "custom-flannel-648600", held for 12.6812262s
	I1210 07:31:41.273664    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.324666    2240 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:31:41.329678    2240 ssh_runner.go:195] Run: cat /version.json
	I1210 07:31:41.329678    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.334670    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	W1210 07:31:41.497715    2240 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:31:41.501431    2240 ssh_runner.go:195] Run: systemctl --version
	I1210 07:31:41.518880    2240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:31:41.528176    2240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:31:41.531184    2240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:31:41.579185    2240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
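	Disabling the bridge and podman CNI configs is done by renaming rather than deleting, so the files can be restored later by stripping the suffix. A standalone sketch of the same find-and-rename (the .mk_disabled suffix matches the run above):
	
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;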
	I1210 07:31:41.579185    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:41.579185    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:41.579185    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:31:41.596178    2240 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:31:41.596178    2240 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:31:41.606178    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:31:41.626187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:31:41.641198    2240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:31:41.645182    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:31:41.668187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.687179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:31:41.706179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.724180    2240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:31:41.742180    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:31:41.759185    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:31:41.778184    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:31:41.795180    2240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:31:41.811185    2240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:31:41.828187    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:41.983806    2240 ssh_runner.go:195] Run: sudo systemctl restart containerd
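	The sed edits above rewrite /etc/containerd/config.toml so containerd matches the cgroupfs driver detected on the host; SystemdCgroup = false is what actually selects cgroupfs for the runc runtime. A quick way to spot-check the end state before restarting (the expected values are assumptions based on the commands above, not captured output):
	
	sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml
	# Expected, assuming every sed pattern matched:
	#   SystemdCgroup = false
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   enable_unprivileged_ports = true
	sudo systemctl restart containerd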
	I1210 07:31:42.163822    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:42.163822    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:42.167818    2240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:31:42.193819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.216825    2240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:31:42.280833    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.301820    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:31:42.320823    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:31:42.345832    2240 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:31:42.358831    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:31:42.373835    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:31:42.401822    2240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:42.551686    2240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:31:42.712827    2240 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:31:42.712827    2240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:31:42.735824    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:31:42.756828    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:42.906845    2240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:31:43.937123    2240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0302614s)
	I1210 07:31:43.944887    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:31:43.971819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:31:43.996364    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.030377    2240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:31:44.173489    2240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:31:44.332105    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.483148    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:31:44.509404    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:31:44.533765    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.690011    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:31:44.790147    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.810716    2240 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:31:44.813714    2240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:31:44.820719    2240 start.go:564] Will wait 60s for crictl version
	I1210 07:31:44.824717    2240 ssh_runner.go:195] Run: which crictl
	I1210 07:31:44.835701    2240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:31:44.880457    2240 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:31:44.883920    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:44.928460    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:45.060104    2240 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:31:45.062900    2240 cli_runner.go:164] Run: docker exec -t custom-flannel-648600 dig +short host.docker.internal
	I1210 07:31:45.193754    2240 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:31:45.197851    2240 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:31:45.204880    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
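	The /etc/hosts update above filters out any existing host.minikube.internal entry before appending the fresh one, then copies the temp file into place, so repeated provisioning never duplicates the line. The same rewrite in isolation (IP and name are placeholders):
	
	ip=192.168.65.254; name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts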
	I1210 07:31:45.225085    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:45.282870    2240 kubeadm.go:884] updating cluster {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
	APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
	CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:31:45.283875    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:45.286873    2240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:31:45.317078    2240 docker.go:691] Got preloaded images: 
	I1210 07:31:45.317078    2240 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:31:45.317078    2240 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:31:45.330428    2240 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.336331    2240 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.341435    2240 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.341435    2240 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.347452    2240 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.347452    2240 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.352434    2240 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.355426    2240 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.358455    2240 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.361429    2240 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.365434    2240 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.366439    2240 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.369440    2240 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:45.370428    2240 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.374431    2240 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.379430    2240 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:31:45.411422    2240 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.466193    2240 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.518621    2240 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.573883    2240 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.622874    2240 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.672905    2240 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.723034    2240 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.771034    2240 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:31:45.842424    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.842823    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.869734    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890739    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890951    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.897121    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.901151    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.922366    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:31:45.956325    2240 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:31:45.956325    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:45.956325    2240 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.961320    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.992754    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:31:46.059786    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:31:46.060783    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.065694    2240 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:31:46.065694    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.065694    2240 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:31:46.067530    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.067911    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.068609    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:46.070610    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:31:46.073597    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.074603    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.147805    2240 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:31:46.151807    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.261151    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:46.262119    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:46.272115    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.272115    2240 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:31:46.272115    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.272115    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:31:46.272115    2240 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.272115    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:31:46.277116    2240 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.278121    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.289109    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.293116    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:31:46.476808    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:31:46.481795    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:46.504793    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:31:46.504793    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:31:46.672791    2240 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.672791    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:31:47.172597    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:31:47.208589    2240 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:47.208589    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
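	Each cached image is transferred as a tarball under /var/lib/minikube/images and streamed into the runtime. The cat-pipe form below is what the log shows; docker load -i is the equivalent direct form (path reused from the run above):
	
	sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load
	sudo docker load -i /var/lib/minikube/images/pause_3.10.1   # equivalent, without the pipe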
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
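
Editor's note: every describe-nodes attempt above fails with connection refused on [::1]:8443, which means no process is listening on the apiserver port at all, as opposed to a TLS or authorization failure. A minimal Go probe that makes the same distinction, assuming the same host and port as the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// The repeated "connection refused" on [::1]:8443 above means no TCP
// listener exists on the apiserver port; a dial probe separates that
// case from a listener that accepts but fails higher in the stack.
func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not listening:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 open; a failure would be TLS/auth, not dial")
}
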
	I1210 07:31:48.287161    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0785558s)
	I1210 07:31:48.287161    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:31:48.287161    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:48.287161    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:31:51.130300    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.8430943s)
	I1210 07:31:51.130300    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:31:51.130300    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:51.130300    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:31:52.383759    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.2534401s)
	I1210 07:31:52.383759    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:31:52.383759    2240 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:52.383759    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
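
Editor's note: interleaved with the log gathering, PID 2240 is loading the preloaded images: each cached tarball is streamed into the daemon with `sudo cat <tar> | docker load`, and ssh_runner.go:235 reports the elapsed time on completion. A local sketch of that pipeline (the path is taken from the log for illustration; minikube actually runs the command over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImage streams a cached image tarball into the Docker daemon,
// mirroring the `sudo cat <tar> | docker load` pipeline in the log,
// and reports elapsed time the way ssh_runner.go:235 does.
func loadImage(tarball string) error {
	start := time.Now()
	cmd := exec.Command("/bin/bash", "-c", "sudo cat "+tarball+" | docker load")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("docker load failed: %v: %s", err, out)
	}
	fmt.Printf("loaded %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	_ = loadImage("/var/lib/minikube/images/etcd_3.6.5-0")
}
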
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.448511    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:52.475325    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:52.506360    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.506360    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:52.510172    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:52.540147    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.540147    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:52.544437    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:52.575774    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.575774    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:52.579336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:52.610061    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.610061    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:52.613342    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:52.642765    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.642765    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:52.649215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:52.678701    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.678701    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:52.682526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:52.710203    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.710203    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:52.715870    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:52.745326    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.745351    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:52.745351    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:52.745397    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.811401    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:52.811401    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:52.853138    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:52.853138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:52.968335    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:52.968335    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:52.968335    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:52.995279    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:52.995802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:55.245680    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8618761s)
	I1210 07:31:55.245680    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:31:55.246466    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:55.246522    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:31:56.790187    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.5436405s)
	I1210 07:31:56.790187    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:31:56.790187    2240 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:56.790187    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:55.548093    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:55.571449    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:55.603901    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.603970    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:55.607695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:55.639065    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.639065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:55.643536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:55.671930    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.671930    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:55.675998    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:55.704460    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.704460    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:55.708947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:55.739257    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.739257    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:55.742852    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:55.772295    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.772344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:55.776423    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:55.803812    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.803812    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:55.809849    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:55.841586    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.841647    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:55.841647    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:55.841647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:55.916368    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:55.916368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:55.958653    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:55.958653    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:56.055702    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:56.055702    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:56.055702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:56.084883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:56.084883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
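
Editor's note: each repeated block from PID 1436 is one pass of minikube's log gatherer: probe for a live kube-apiserver with pgrep, then ask Docker for a container per control-plane component. A rough sketch of the per-component probe using the same names and filter the log shows (the helper names here are hypothetical, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs returns the IDs of containers whose name matches k8s_<name>,
// the same docker ps filter minikube issues in the log.
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		if len(containerIDs(c)) == 0 {
			fmt.Printf("W: No container was found matching %q\n", c)
		}
	}
}
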
	I1210 07:32:01.290113    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (4.4998566s)
	I1210 07:32:01.290113    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:32:01.290113    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:32:01.290113    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	I1210 07:31:58.642350    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:58.668189    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:58.699633    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.699633    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:58.705036    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:58.738553    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.738553    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:58.742579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:58.772414    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.772414    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:58.775757    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:58.804872    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.804872    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:58.808509    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:58.835398    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.835398    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:58.843124    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:58.871465    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.871465    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:58.875535    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:58.905029    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.905108    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:58.910324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:58.953100    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.953100    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:58.953100    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:58.953100    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:59.012946    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:59.012946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:59.052964    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:59.052964    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:59.146228    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:59.146228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:59.146228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:59.173200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:59.173200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.725170    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:01.746739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:01.779670    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.779670    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:01.783967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:01.812617    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.812617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:01.817482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:01.848083    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.848083    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:01.852344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:01.883648    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.883648    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:01.887655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:01.918403    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.918403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:01.922409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:01.961721    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.961721    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:01.969744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:01.998302    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.998302    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:02.003804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:02.032315    1436 logs.go:282] 0 containers: []
	W1210 07:32:02.032315    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:02.032315    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:02.032315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:02.096900    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:02.096900    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:02.136137    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:02.136137    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:02.227732    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:02.227732    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:02.227732    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:02.255236    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:02.255236    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:03.670542    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3803916s)
	I1210 07:32:03.670542    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:32:03.670542    2240 cache_images.go:125] Successfully loaded all cached images
	I1210 07:32:03.670542    2240 cache_images.go:94] duration metric: took 18.3531776s to LoadCachedImages
	I1210 07:32:03.670542    2240 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1210 07:32:03.670542    2240 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
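
Editor's note: the kubelet drop-in above is generated per node: the binary path carries the Kubernetes version, and the ExecStart flags carry the node name and IP. A sketch of how such a unit could be templated (the template text is modeled on the log output, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// unit reproduces the [Service] section printed at kubeadm.go:947 above,
// with the node-specific values left as template fields.
const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.34.3", "Node": "custom-flannel-648600", "IP": "192.168.85.2",
	})
}
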
	I1210 07:32:03.674057    2240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:32:03.753844    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:03.753844    2240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:03.753844    2240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-648600 NodeName:custom-flannel-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:03.753844    2240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:32:03.758233    2240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.772950    2240 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:32:03.777455    2240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
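
Editor's note: the binary.go:80 lines show the no-cache download path: each binary comes from dl.k8s.io with the companion .sha256 file as its checksum source. A minimal sketch of that verification, assuming the same URLs (it reads the whole ~60 MB binary into memory, which a real downloader would avoid):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL fully into memory (fine for a sketch only).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	// The .sha256 file holds the hex digest; compare against our own hash.
	want := strings.Fields(string(sumFile))[0]
	got := sha256.Sum256(bin)
	fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
}
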
	I1210 07:32:03.796039    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:03.796814    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:32:03.796843    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:32:03.817843    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:32:03.818011    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:32:03.818298    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:32:03.818803    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:32:03.822978    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:32:03.833074    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:32:03.833638    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 07:32:05.838364    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:05.850364    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1210 07:32:05.870151    2240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:05.891336    2240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:32:05.915010    2240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:05.922767    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:32:05.942185    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:06.099167    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:06.121581    2240 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600 for IP: 192.168.85.2
	I1210 07:32:06.121613    2240 certs.go:195] generating shared ca certs ...
	I1210 07:32:06.121640    2240 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.121920    2240 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:32:06.122447    2240 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:32:06.122578    2240 certs.go:257] generating profile certs ...
	I1210 07:32:06.122578    2240 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key
	I1210 07:32:06.122578    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt with IP's: []
	I1210 07:32:06.321440    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt ...
	I1210 07:32:06.321440    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt: {Name:mk30a4977cc0d8ffd50678b3c23caa1e53531dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.322223    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key ...
	I1210 07:32:06.322223    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key: {Name:mke10982a653bbe15c8edebf2f43dc216f9268be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.323200    2240 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba
	I1210 07:32:06.323200    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:32:06.341062    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba ...
	I1210 07:32:06.341062    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba: {Name:mk0e9e825524eecc7aedfd18bb3bfe0b08c0466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342014    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba ...
	I1210 07:32:06.342014    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba: {Name:mk42b80e536f4c7e07cd83fa60afbb5af1e6e8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342947    2240 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt
	I1210 07:32:06.354920    2240 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key
	I1210 07:32:06.355812    2240 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key
	I1210 07:32:06.355812    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt with IP's: []
	I1210 07:32:06.438517    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt ...
	I1210 07:32:06.438517    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt: {Name:mk49d63357d91f886b5db1adca8a8959ac8a2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.439596    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key ...
	I1210 07:32:06.439596    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key: {Name:mkd00fe816a16ba7636ee1faff5584095510b505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
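
Editor's note: the crypto.go:68 lines generate leaf certificates signed by minikubeCA, with the apiserver cert carrying the service IP, localhost, and node IP as SANs. A self-contained sketch of that shape using Go's crypto/x509 (the CA is generated in place for illustration where minikube loads its existing one, and error handling is elided):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; minikube would load minikubeCA from disk instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the same IP SANs listed in the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
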
	I1210 07:32:06.454147    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:32:06.454968    2240 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:06.454968    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:32:06.455228    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:32:06.455417    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:32:06.455581    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:32:06.455768    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:32:06.456703    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:06.490234    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:06.516382    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:06.546895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:06.579157    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:32:06.611194    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:32:06.642582    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:06.673947    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:32:06.702762    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:32:06.734932    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:32:06.763895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:06.794884    2240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:32:06.824804    2240 ssh_runner.go:195] Run: openssl version
	I1210 07:32:06.839620    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.863187    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:32:06.881235    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.889982    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.896266    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.945361    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:06.965592    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:32:06.982615    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.000345    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:32:07.019650    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.028440    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.032681    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.080664    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.098781    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.119820    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.138968    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:07.157588    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.166110    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.169123    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.218939    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:32:07.238245    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
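
Editor's note: the openssl/ln sequence above installs each CA into the OpenSSL trust directory: `openssl x509 -hash` yields the subject-hash filename (e.g. b5213941), and the certificate is symlinked as /etc/ssl/certs/<hash>.0. A sketch of the same two steps (paths taken from the log; the symlink needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCert computes the OpenSSL subject hash of a CA certificate and
// symlinks it as /etc/ssl/certs/<hash>.0, mirroring the log's sequence.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln -fs by replacing any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
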
	I1210 07:32:07.255844    2240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:07.263714    2240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:32:07.263714    2240 kubeadm.go:401] StartCluster: {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
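	The StartCluster line records the effective cluster config for this test: Driver:docker, KubernetesVersion:v1.34.3, Memory:3072, CPUs:2, and a custom CNI manifest (CNI:testdata\kube-flannel.yaml) instead of a built-in CNI. Assuming the standard minikube CLI flags map onto these fields, a roughly equivalent manual invocation would be:

	    minikube start -p custom-flannel-648600 --driver=docker \
	        --kubernetes-version=v1.34.3 --memory=3072 --cpus=2 \
	        --cni=testdata\kube-flannel.yaml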
	I1210 07:32:07.267520    2240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:32:07.300048    2240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:07.317060    2240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:32:07.333647    2240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:32:07.337744    2240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:32:07.353638    2240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:32:07.353638    2240 kubeadm.go:158] found existing configuration files:
	
	I1210 07:32:07.357869    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:32:07.371538    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:32:07.375620    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:32:07.392582    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:32:07.408459    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:32:07.412872    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:32:07.431340    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.446697    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:32:07.451332    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.472431    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:04.810034    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:04.838035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:04.888039    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.888039    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:04.892025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:04.955032    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.955032    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:04.959038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:04.995031    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.995031    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:04.999034    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:05.035036    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.035036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:05.040047    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:05.079034    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.079034    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:05.084038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:05.123032    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.123032    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:05.128035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:05.165033    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.165033    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:05.169028    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:05.205183    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.205183    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:05.205183    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:05.205183    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:05.248358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:05.248358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:05.349366    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:05.349366    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:05.349366    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:05.384377    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:05.384377    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:05.439383    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:05.439383    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
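	Each of these gather cycles includes a container-status probe with a crictl-first fallback; the backquoted command in the log expands as below, so the docker CLI is consulted only when crictl is missing or fails:

	    # Prefer crictl if installed; otherwise fall back to the docker CLI.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a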
	I1210 07:32:08.021198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:08.045549    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:08.076568    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.076568    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:08.082429    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:08.113514    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.113514    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:08.117280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:08.145243    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.145243    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:08.151846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:08.182475    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.182475    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:08.186570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:08.214500    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.214554    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:08.218698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:08.250229    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.250229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:08.254493    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:08.298394    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.298394    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:08.302457    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:08.331561    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.331561    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:08.331561    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:08.331561    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:08.368913    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:08.368913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:32:07.487983    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:32:07.492242    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
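	The config check at 07:32:07.353638 exited with status 2, so none of the four kubeconfigs exist yet; minikube nevertheless greps each one for the control-plane endpoint and removes any file that does not contain it. A shell sketch of that cleanup pass, with the paths and endpoint taken from the log:

	    ep="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # Keep the kubeconfig only if it already points at the expected endpoint.
	        sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done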
	I1210 07:32:07.510557    2240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:32:07.626646    2240 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:32:07.630270    2240 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:32:07.725615    2240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
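	These three kubeadm warnings are non-fatal: Swap and SystemVerification appear in the --ignore-preflight-errors list passed at 07:32:07.510557, and Service-Kubelet is only a warning. On a persistent host the latter would be addressed exactly as the message suggests:

	    sudo systemctl enable kubelet.service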
	W1210 07:32:08.453343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:08.453378    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:08.453417    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:08.488219    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:08.488219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:08.533777    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:08.533777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.100898    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:11.123310    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:11.154369    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.154369    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:11.158211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:11.188349    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.188419    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:11.191999    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:11.218233    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.218263    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:11.222177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:11.248157    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.248157    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:11.252075    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:11.280934    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.280934    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:11.284871    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:11.316173    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.316225    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:11.320150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:11.350432    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.350494    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:11.354282    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:11.381767    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.381819    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:11.381819    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:11.381874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.447079    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:11.447079    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:11.485987    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:11.485987    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:11.568313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:11.568365    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:11.568408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:11.599474    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:11.599518    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:14.165429    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:14.189363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:14.220411    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.220478    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:14.223878    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:14.253748    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.253798    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:14.257409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:14.288235    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.288235    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:14.291689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:14.323349    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.323349    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:14.326680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:14.355227    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.355227    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:14.358704    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:14.389648    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.389648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:14.393032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:14.424212    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.424212    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:14.427425    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:14.457834    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.457834    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:14.457834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:14.457834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:14.486053    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:14.486053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:14.538138    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:14.538138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:14.601542    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:14.601542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:14.638885    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:14.638885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:14.724482    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
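	The repeated "connection refused" from kubectl means nothing is listening on localhost:8443 inside the node, consistent with the empty "docker ps -a" results for every control-plane container above. The same conclusion can be reached by hand with the probes this log already runs, plus a direct check of the secure port:

	    sudo pgrep -xnf kube-apiserver.*minikube.*                      # no apiserver process
	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}  # no container, even exited
	    curl -k https://localhost:8443/livez                            # connection refused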
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:29.223517    2240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:32:29.224269    2240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:32:29.224467    2240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:32:29.229027    2240 out.go:252]   - Generating certificates and keys ...
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:32:29.229660    2240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:32:29.229827    2240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:32:29.230468    2240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.230658    2240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:32:29.230768    2240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:32:29.230900    2240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:32:29.231503    2240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:32:29.231582    2240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:32:29.231582    2240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:32:29.234181    2240 out.go:252]   - Booting up control plane ...
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:32:29.234702    2240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:32:29.234874    2240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002366911s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.235267696s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.434241439s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.5023353s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:32:29.236992    2240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:32:29.237590    2240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:32:29.237590    2240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:32:29.237590    2240 kubeadm.go:319] [bootstrap-token] Using token: a4ld74.20ve6i3rm5ksexxo
	I1210 07:32:29.239648    2240 out.go:252]   - Configuring RBAC rules ...
	I1210 07:32:29.239648    2240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:32:29.240674    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:32:29.240944    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:32:29.241383    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:32:29.241649    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:32:29.241668    2240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:32:29.241668    2240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:32:29.242197    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:32:29.242850    2240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:32:29.242850    2240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:32:29.243436    2240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--control-plane 
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:32:29.244018    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.244018    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
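The kubeadm summary above is itself the reproduction recipe; if the printed token or CA hash is needed after this log scrolls away, both can be re-derived inside the node. A minimal sketch, assuming kubeadm is on PATH in the node and using the certificateDir /var/lib/minikube/certs reported earlier in this log:

    # list the active bootstrap tokens (the a4ld74.* token above should appear here)
    sudo kubeadm token list
    # recompute --discovery-token-ca-cert-hash from the cluster CA (standard kubeadm recipe)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'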
	I1210 07:32:29.244018    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:29.246745    2240 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1210 07:32:29.266121    2240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 07:32:29.270492    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1210 07:32:29.280075    2240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1210 07:32:29.280075    2240 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1210 07:32:29.314572    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
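Once the manifest is applied, the flannel DaemonSet rolls out asynchronously; a quick way to watch it with the same pinned kubectl, assuming the upstream kube-flannel.yaml object names (namespace kube-flannel, DaemonSet kube-flannel-ds), which this log does not show:

    sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=120s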
	I1210 07:32:29.754597    2240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-648600 minikube.k8s.io/updated_at=2025_12_10T07_32_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=custom-flannel-648600 minikube.k8s.io/primary=true
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.770603    2240 ops.go:34] apiserver oom_adj: -16
	I1210 07:32:29.895974    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.395328    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.896828    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.396414    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.896200    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:32.396778    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:32.894984    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.397040    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.895777    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:34.084987    2240 kubeadm.go:1114] duration metric: took 4.3302518s to wait for elevateKubeSystemPrivileges
	I1210 07:32:34.085013    2240 kubeadm.go:403] duration metric: took 26.8208803s to StartCluster
	I1210 07:32:34.085095    2240 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.085299    2240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:32:34.087295    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.088397    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:32:34.088397    2240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:32:34.088932    2240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:34.089115    2240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-648600"
	I1210 07:32:34.089272    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.089454    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:32:34.091048    2240 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:34.099313    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.100384    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.101389    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.165121    2240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-648600"
	I1210 07:32:34.165121    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.166107    2240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:32:34.174109    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.177116    2240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:34.177116    2240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:32:34.181109    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.228110    2240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.228110    2240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:32:34.231111    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.232110    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.295102    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.361698    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:32:34.577307    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.743911    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.748484    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:35.145540    2240 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
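The long sed pipeline a few lines up splices a hosts block (192.168.65.254 host.minikube.internal, with fallthrough) into the Corefile ahead of the forward plugin; the injected record can be read back directly from the ConfigMap:

    sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'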
	I1210 07:32:35.149854    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:35.210514    2240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:35.684992    2240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-648600" context rescaled to 1 replicas
	I1210 07:32:35.860846    2240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1123448s)
	I1210 07:32:35.863841    2240 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 07:32:35.869842    2240 addons.go:530] duration metric: took 1.7814171s for enable addons: enabled=[default-storageclass storage-provisioner]
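The same addon state is visible from outside the node via minikube's per-profile listing; a sketch using the profile name from this run:

    minikube addons list -p custom-flannel-648600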
	W1210 07:32:37.217134    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:32:39.747934    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:42.215582    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:44.217341    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:45.217929    2240 node_ready.go:49] node "custom-flannel-648600" is "Ready"
	I1210 07:32:45.217929    2240 node_ready.go:38] duration metric: took 10.0071872s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:45.217929    2240 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:45.221913    2240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.241224    2240 api_server.go:72] duration metric: took 11.1520714s to wait for apiserver process to appear ...
	I1210 07:32:45.241248    2240 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:45.241297    2240 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58199/healthz ...
	I1210 07:32:45.255531    2240 api_server.go:279] https://127.0.0.1:58199/healthz returned 200:
	ok
	I1210 07:32:45.259632    2240 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:45.259696    2240 api_server.go:131] duration metric: took 18.4479ms to wait for apiserver health ...
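The healthz probe above goes through the host-mapped apiserver port (58199, obtained when the container's 8443/tcp mapping was inspected a few lines earlier); it can be reproduced from the Windows host with curl, with -k because the apiserver's serving cert is not in the host trust store:

    curl -k https://127.0.0.1:58199/healthz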
	I1210 07:32:45.259716    2240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:45.268791    2240 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:45.268849    2240 system_pods.go:61] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.268849    2240 system_pods.go:61] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.268894    2240 system_pods.go:74] duration metric: took 9.14ms to wait for pod list to return data ...
	I1210 07:32:45.268935    2240 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:45.273316    2240 default_sa.go:45] found service account: "default"
	I1210 07:32:45.273353    2240 default_sa.go:55] duration metric: took 4.4181ms for default service account to be created ...
	I1210 07:32:45.273353    2240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:45.280767    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.280945    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.280945    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.281064    2240 retry.go:31] will retry after 250.377545ms: missing components: kube-dns
	I1210 07:32:45.539061    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.539616    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.539616    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.539718    2240 retry.go:31] will retry after 289.337772ms: missing components: kube-dns
	I1210 07:32:45.840329    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.840329    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.840329    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.840528    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.840528    2240 retry.go:31] will retry after 309.196772ms: missing components: kube-dns
	I1210 07:32:46.157293    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.157293    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.157293    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.157293    2240 retry.go:31] will retry after 407.04525ms: missing components: kube-dns
	I1210 07:32:46.592154    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.592265    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.592265    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.592318    2240 retry.go:31] will retry after 495.94184ms: missing components: kube-dns
	I1210 07:32:47.094557    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.094557    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.094557    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.095074    2240 retry.go:31] will retry after 778.892273ms: missing components: kube-dns
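	[editor's note] The retry.go lines above show minikube polling kube-system pods with a growing, jittered backoff ("will retry after 250ms ... 289ms ... 309ms ..."). Below is a minimal self-contained Go sketch of that pattern, not minikube's actual retry.go: checkSystemPods is a hypothetical stand-in for the real readiness probe, and the timings are illustrative.

	// poll_kube_dns.go: sketch of jittered-backoff polling, as seen in the
	// "will retry after ...: missing components: kube-dns" lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// checkSystemPods is a hypothetical stand-in for the real probe; it
	// reports kube-dns as missing for the first few attempts.
	func checkSystemPods(attempt int) error {
		if attempt < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}

	func main() {
		deadline := time.Now().Add(30 * time.Second)
		backoff := 250 * time.Millisecond
		for attempt := 0; ; attempt++ {
			if err := checkSystemPods(attempt); err == nil {
				fmt.Println("all system pods running")
				return
			} else if time.Now().After(deadline) {
				fmt.Println("timed out:", err)
				return
			} else {
				// Grow the interval and add jitter, echoing the
				// increasing "will retry after ..." durations above.
				sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
				fmt.Printf("will retry after %v: %v\n", sleep, err)
				time.Sleep(sleep)
				backoff = backoff * 3 / 2
			}
		}
	}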
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
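	[editor's note] The cycle above scans for one container per control-plane component by running `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and logging "0 containers" when nothing matches. A minimal local sketch of that scan follows, using only the Go standard library; minikube runs the same command over SSH inside the node, so running it locally is an illustrative assumption.

	// list_k8s_containers.go: sketch of the per-component container scan
	// repeated throughout the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("docker ps failed for %q: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}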
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:47.881744    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.881744    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.881744    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.882297    2240 retry.go:31] will retry after 913.098856ms: missing components: kube-dns
	I1210 07:32:48.802046    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:48.802046    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:48.802046    2240 system_pods.go:126] duration metric: took 3.5286376s to wait for k8s-apps to be running ...
	I1210 07:32:48.802046    2240 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:48.807470    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:48.825598    2240 system_svc.go:56] duration metric: took 23.5517ms WaitForService to wait for kubelet
	I1210 07:32:48.825598    2240 kubeadm.go:587] duration metric: took 14.7364354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:48.825689    2240 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:48.831503    2240 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:32:48.831503    2240 node_conditions.go:123] node cpu capacity is 16
	I1210 07:32:48.831503    2240 node_conditions.go:105] duration metric: took 5.8138ms to run NodePressure ...
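	[editor's note] The node_conditions.go lines above record the node's ephemeral-storage and CPU capacity. A minimal client-go sketch of reading those fields follows; it is not minikube's node_conditions.go, it assumes the k8s.io/client-go module is available, and the node name and kubeconfig path are taken from this log for illustration.

	// node_capacity.go: sketch of reading node capacity from status.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(
			context.Background(), "custom-flannel-648600", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
	}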
	I1210 07:32:48.831503    2240 start.go:242] waiting for startup goroutines ...
	I1210 07:32:48.831503    2240 start.go:247] waiting for cluster config update ...
	I1210 07:32:48.831503    2240 start.go:256] writing updated cluster config ...
	I1210 07:32:48.837195    2240 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:48.844148    2240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:48.853005    2240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.864384    2240 pod_ready.go:94] pod "coredns-66bc5c9577-dhgpj" is "Ready"
	I1210 07:32:48.864472    2240 pod_ready.go:86] duration metric: took 11.4282ms for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.867887    2240 pod_ready.go:83] waiting for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.876367    2240 pod_ready.go:94] pod "etcd-custom-flannel-648600" is "Ready"
	I1210 07:32:48.876367    2240 pod_ready.go:86] duration metric: took 8.4794ms for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.880884    2240 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.888453    2240 pod_ready.go:94] pod "kube-apiserver-custom-flannel-648600" is "Ready"
	I1210 07:32:48.888453    2240 pod_ready.go:86] duration metric: took 7.5694ms for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.891939    2240 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.254863    2240 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-648600" is "Ready"
	I1210 07:32:49.255015    2240 pod_ready.go:86] duration metric: took 363.0699ms for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.454047    2240 pod_ready.go:83] waiting for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.854254    2240 pod_ready.go:94] pod "kube-proxy-vrrgr" is "Ready"
	I1210 07:32:49.854329    2240 pod_ready.go:86] duration metric: took 400.2758ms for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.054101    2240 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:94] pod "kube-scheduler-custom-flannel-648600" is "Ready"
	I1210 07:32:50.453713    2240 pod_ready.go:86] duration metric: took 399.6056ms for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:40] duration metric: took 1.6095401s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:50.552047    2240 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:32:50.555856    2240 out.go:179] * Done! kubectl is now configured to use "custom-flannel-648600" cluster and "default" namespace by default
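	[editor's note] The pod_ready.go waits that conclude above check each kube-system pod's Ready condition. A minimal client-go sketch of that check follows; it is not minikube's pod_ready.go, it assumes the k8s.io/client-go module, and the pod name and kubeconfig path come from this log for illustration.

	// pod_ready_probe.go: sketch of reading a pod's PodReady condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.Background(), "coredns-66bc5c9577-dhgpj", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A pod is "Ready" when its PodReady condition reports True.
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %q Ready=%s\n", pod.Name, cond.Status)
			}
		}
	}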
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
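	[editor's note] Every "describe nodes" attempt above fails with "dial tcp [::1]:8443: connect: connection refused", meaning nothing is bound to the apiserver port. A minimal standard-library sketch of a quick liveness probe for that condition follows; the address mirrors the localhost:8443 endpoint in the kubectl errors and the two-second timeout is an illustrative assumption.

	// apiserver_probe.go: sketch of a TCP reachability check for the
	// apiserver endpoint behind the "connection refused" errors above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connect: connection refused" here matches the kubectl
			// errors in the log: no process is listening on the port.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}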
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
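
Each cycle above runs the same two probes: a pgrep for a live kube-apiserver process, then a docker ps sweep over the expected control-plane container names (the k8s_* prefix is the cri-dockerd naming convention). A standalone sketch of that sweep, assuming shell access to the minikube node:

	# Probe for a running kube-apiserver whose command line mentions minikube.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# Sweep the expected components; an empty ID list means the container
	# was never created (matching the "0 containers" lines in this log).
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  echo "${c}: ${ids:-<none>}"
	done
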
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
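
The "Gathering logs" steps repeated in these cycles shell out to journalctl and dmesg with fixed flags. The equivalent commands, runnable directly on the node:

	# Last 400 lines from the kubelet and container-runtime units.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400

	# Kernel messages at warn level and above, human-readable, colors off.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
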
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
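
The "container status" step uses a fallback chain: it prefers crictl when the binary is on PATH, and drops back to plain docker if the crictl invocation fails. Spelled out, with the backtick substitution made explicit:

	# Use crictl if installed; if it is missing or cannot reach a CRI
	# socket, the non-zero exit triggers the docker fallback.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
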
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
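
The recurring "failed describe nodes" warnings all have the same cause: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running, every discovery request is refused. A minimal reproduction plus a port check (the ss invocation is an added suggestion, not part of the collector):

	# Same probe the log collector runs; fails while the apiserver is down.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig

	# Confirm nothing is listening on the apiserver port.
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
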
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
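
The W-lines from PID 6044 interleaved here come from the parallel no-preload-099700 test, which polls node readiness through the forwarded apiserver port 57440; the EOF means the TCP connection closed before a response arrived. A rough reachability check (hypothetical manual step; it tests the endpoint only, not authentication):

	# -k: the forwarded endpoint serves the cluster's self-signed cert.
	curl -ks https://127.0.0.1:57440/api/v1/nodes/no-preload-099700 \
	  || echo "apiserver not answering on the forwarded port"
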
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 
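
The block above is the test's terminal failure: the start path gave the node six minutes to report Ready, every poll failed (first EOF, then a rate-limiter wait cut short by the expiring context), and the deadline surfaced as GUEST_START / "WaitNodeCondition: context deadline exceeded". A minimal stdlib sketch of that deadline-bounded readiness poll follows; the 6m window mirrors the log, but waitNodeReady, the 2s poll interval, and the always-failing check stub are illustrative stand-ins, not minikube's own code.

// Sketch of a deadline-bounded readiness poll, assuming the 6m wait
// window shown in the log. Not minikube's implementation.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitNodeReady(ctx context.Context, check func() (bool, error)) error {
	t := time.NewTicker(2 * time.Second) // illustrative poll interval
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			// This is the branch the run above hit: "context deadline exceeded".
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-t.C:
			ok, err := check()
			if ok {
				return nil
			}
			if err != nil {
				fmt.Println("will retry:", err) // cf. the node_ready.go:55 retry lines above
			}
		}
	}
}

func main() {
	// 6*time.Minute mirrors the log's wait window; shorten it to try the sketch.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitNodeReady(ctx, func() (bool, error) {
		return false, errors.New("connection refused") // stand-in for the EOF/refused errors above
	})
	fmt.Println(err)
}

The ctx.Done() branch is the one this run took: the poll never saw a Ready condition because the apiserver behind it never answered.
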
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:15.052930    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:15.080623    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:15.117403    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.117403    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:15.120370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:15.147363    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.148371    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:15.151363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:15.180365    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.180365    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:15.183366    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:15.215366    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.215366    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:15.218364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:15.247369    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.247369    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:15.251365    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:15.283373    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.283373    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:15.286369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:15.314370    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.314370    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:15.317368    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:15.347380    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.347380    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:15.347380    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:15.347380    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:15.421369    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:15.421369    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:15.458368    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:15.458368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:15.566221    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:15.566279    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:15.566338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:15.605803    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:15.605803    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:18.163754    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:18.197669    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:18.254543    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.254543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:18.260541    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:18.293062    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.293062    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:18.296833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:18.327885    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.327968    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:18.331280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:18.368942    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.368942    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:18.372299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:18.400463    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.400463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:18.405006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:18.446334    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.446379    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:18.449958    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:18.478295    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.478381    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:18.482123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:18.510432    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.510506    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:18.510548    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:18.510548    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:18.572862    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:18.572862    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:18.614127    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:18.614127    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:18.702730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:18.702730    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:18.702730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:18.729639    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:18.729639    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.289931    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:21.315099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:21.349129    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.349129    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:21.352917    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:21.385897    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.386013    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:21.389207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:21.439847    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.439847    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:21.444868    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:21.473011    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.473011    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:21.476938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:21.503941    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.503983    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:21.507954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:21.536377    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.536377    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:21.540123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:21.571714    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.571714    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:21.575681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:21.605581    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.605581    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:21.605581    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:21.605581    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:21.633565    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:21.633565    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.687271    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:21.687271    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:21.750102    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:21.750102    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:21.792165    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:21.792165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:21.885403    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
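
Each cycle in this log is minikube's log-gathering loop probing for control-plane containers: a pgrep for a kube-apiserver process, then one docker ps -a --filter=name=k8s_<component> per component, each returning "0 containers: []". A hand-rolled sketch of the same probe, useful for reproducing it against the node's Docker daemon by hand, is below; the component list and format string are copied from the commands above, but the program itself is a sketch, not logs.go.

// Reproduces the per-component container probe visible in the log.
// Requires a reachable docker CLI; component names match the filters above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "probe failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		// "0 containers: []" here reproduces the logs.go:282 lines above.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
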
	I1210 07:34:24.393597    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:24.420363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:24.450891    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.450891    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:24.454037    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:24.483407    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.483407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:24.489862    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:24.517830    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.517830    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:24.521711    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:24.549403    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.549403    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:24.553551    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:24.580367    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.580367    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:24.584748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:24.612646    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.612646    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:24.616710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:24.647684    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.647753    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:24.651184    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:24.679053    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.679053    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:24.679053    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:24.679053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:24.768115    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:24.768115    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:24.768115    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:24.795167    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:24.795201    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:24.844459    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:24.844459    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:24.907171    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:24.907171    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
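
Every describe-nodes attempt in these cycles fails the same way: nothing is listening on localhost:8443, so the kubectl client's discovery calls get connection refused. When triaging this by hand, a direct reachability check against that port distinguishes "the apiserver never started" from "a kubeconfig pointing at the wrong endpoint". A sketch follows, assuming the default localhost:8443 seen in the log; the /healthz path, the 5s timeout, and the skipped certificate verification are illustrative choices for a probe, not anything this test runs.

// Illustrative reachability probe against the apiserver port the log
// shows refusing connections. Assumes localhost:8443 as in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert during bring-up,
			// so certificate verification is skipped for this probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// Matches the "dial tcp [::1]:8443: connect: connection refused" lines above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}
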
	I1210 07:34:27.453205    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:27.478026    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:27.513249    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.513249    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:27.517125    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:27.547733    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.547733    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:27.551680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:27.577736    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.577736    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:27.581469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:27.612483    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.612483    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:27.616434    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:27.644895    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.644895    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:27.650606    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:27.678273    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.678273    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:27.681744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:27.708604    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.708604    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:27.712244    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:27.742726    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.742726    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:27.742726    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:27.742726    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:27.807570    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:27.807570    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.846722    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:27.846722    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:27.929641    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:27.929641    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:27.929641    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:27.956087    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:27.956087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.506646    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:30.530148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:30.563444    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.563444    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:30.567219    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:30.596843    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.596843    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:30.600803    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:30.628947    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.628947    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:30.632665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:30.663325    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.663369    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:30.667341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:30.695640    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.695640    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:30.699545    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:30.728310    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.728310    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:30.731899    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:30.758598    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.758598    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:30.763285    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:30.792051    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.792051    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:30.792051    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:30.792051    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:30.830219    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:30.830219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:30.919635    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:30.919635    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:30.919635    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:30.949360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:30.949360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.997435    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:30.997435    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.565782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:33.590543    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:33.623936    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.623936    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:33.629607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:33.664589    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.664673    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:33.668215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:33.698892    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.698892    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:33.702344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:33.733428    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.733428    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:33.737226    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:33.764873    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.764873    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:33.768422    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:33.800350    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.800350    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:33.804811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:33.836711    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.836711    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:33.840164    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:33.869248    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.869333    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
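Each docker ps probe above filters on the k8s_<component> name prefix that cri-dockerd gives Kubernetes-managed containers; an empty result for every component means no control-plane container was ever created. The whole detection pass condenses to a loop like this (a sketch to run inside the node, not minikube's own code):

    # Probe for each control-plane component by container-name prefix.
    # Empty output across the board matches the "0 containers" lines in the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Names}} {{.Status}}'
    done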
	I1210 07:34:33.869333    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:33.869333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.932626    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:33.933627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:33.974227    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:33.974227    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:34.066031    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
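Every "describe nodes" attempt fails the same way: dial tcp [::1]:8443: connect: connection refused. That means nothing is accepting connections on the apiserver port at all, as opposed to TLS or authorization errors from a running apiserver. A quick way to confirm the symptom from inside the node, assuming ss and curl are available in the node image:

    # Expect no listener and a refused connection while the apiserver is down.
    sudo ss -tlnp | grep 8443 || echo 'nothing listening on :8443'
    curl -ksS --max-time 5 https://localhost:8443/healthz || true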
	I1210 07:34:34.066031    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:34.066031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:34.092765    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:34.092765    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:36.652871    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:36.677531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:36.712608    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.712608    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:36.718832    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:36.748298    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.748298    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:36.751762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:36.783390    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.783403    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:36.787051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:36.815730    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.815766    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:36.819100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:36.848875    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.848875    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:36.852925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:36.886657    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.886657    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:36.890808    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:36.920858    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.920858    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:36.924583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:36.955882    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.955960    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:36.956001    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:36.956001    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:37.021848    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:37.021848    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:37.060744    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:37.060744    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:37.154895    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:37.154895    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:37.154895    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:37.182385    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:37.182385    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:39.737032    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:39.762115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:39.792900    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.792900    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:39.797014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:39.825423    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.825455    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:39.829352    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:39.856679    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.856679    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:39.860615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:39.891351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.891351    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:39.895346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:39.924351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.924351    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:39.928531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:39.956447    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.956447    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:39.961810    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:39.987792    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.987792    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:39.991127    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:40.018614    1436 logs.go:282] 0 containers: []
	W1210 07:34:40.018614    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:40.018614    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:40.018614    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:40.082378    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:40.082378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:40.123506    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:40.123506    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:40.208266    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:40.209272    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:40.209272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:40.239017    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:40.239017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:42.793527    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:42.818084    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:42.852095    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.852095    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:42.855685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:42.883269    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.883269    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:42.887287    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:42.918719    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.918800    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:42.923828    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:42.950663    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.950663    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:42.956319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:42.985991    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.985991    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:42.989729    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:43.017767    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.017824    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:43.021689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:43.048180    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.048180    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:43.052257    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:43.081092    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.081160    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:43.081183    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:43.081217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:43.174944    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:43.174992    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:43.174992    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:43.202288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:43.202807    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:43.249217    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:43.249217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:43.311267    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:43.311267    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:45.857003    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:45.881743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:45.911856    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.911856    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:45.915335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:45.945613    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.945613    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:45.949134    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:45.977768    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.977768    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:45.982182    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:46.010859    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.010859    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:46.014603    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:46.043489    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.043531    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:46.047198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:46.080651    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.080685    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:46.084319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:46.116705    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.116780    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:46.121508    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:46.154299    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.154299    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:46.154299    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:46.154299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:46.222546    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:46.222546    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:46.262468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:46.262468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:46.349894    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:46.349894    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:46.349894    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:46.376804    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:46.376804    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:48.931982    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:48.957769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:48.990182    1436 logs.go:282] 0 containers: []
	W1210 07:34:48.990182    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:48.994255    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:49.021913    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.021913    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:49.026344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:49.054704    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.054704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:49.058471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:49.089507    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.089559    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:49.093804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:49.121462    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.121462    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:49.125755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:49.156174    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.156174    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:49.160707    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:49.190933    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.190933    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:49.194771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:49.220610    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.220610    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:49.220610    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:49.220610    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:49.283897    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:49.283897    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:49.324154    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:49.324154    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:49.412165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:49.412165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:49.413146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:49.440045    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:49.440045    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.013495    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:52.044149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:52.080205    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.080205    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:52.084762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:52.115105    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.115105    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:52.119720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:52.149672    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.149672    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:52.153985    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:52.186711    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.186711    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:52.192181    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:52.217751    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.217751    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:52.221590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:52.250827    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.250876    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:52.254668    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:52.284643    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.284643    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:52.288811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:52.316628    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.316707    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:52.316707    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:52.316707    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:52.348325    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.348325    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.408110    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.408110    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:52.471268    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:52.471268    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:52.511512    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:52.511512    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.594976    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:55.100294    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:55.126530    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:55.160945    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.160945    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:55.164755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:55.196407    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.196407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:55.199994    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:55.229174    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.229174    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:55.232898    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:55.265856    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.265856    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:55.268892    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:55.302098    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.302121    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:55.305590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:55.335754    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.335754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:55.339583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:55.368170    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.368251    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:55.372008    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:55.397576    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.397576    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:55.397576    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:55.397576    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:55.434345    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:55.434345    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:55.528958    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:55.528958    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:55.528958    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:55.555805    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:55.555805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:55.602232    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:55.602232    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:58.169858    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:58.195497    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:58.226557    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.226588    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:58.229677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:58.260817    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.260817    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:58.265378    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:58.293848    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.293920    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:58.297406    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:58.326737    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.326737    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:58.330307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:58.357319    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.357407    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:58.360727    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:58.392361    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.392405    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:58.395697    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:58.425728    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.425807    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:58.429369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:58.457816    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.457866    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:58.457866    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:58.457866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:58.495777    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:58.495777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:58.585489    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:58.585489    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:58.585489    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:58.613007    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:58.613007    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:58.661382    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:58.661382    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.230900    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:01.255356    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:01.292137    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.292190    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:01.297192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:01.328372    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.328372    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:01.332239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:01.360635    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.360635    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:01.364529    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:01.391175    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.391175    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:01.394754    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:01.423093    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.423093    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:01.427022    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:01.454965    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.454965    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:01.459137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:01.487734    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.487734    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:01.492051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:01.518150    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.518150    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:01.518150    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:01.518150    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.580940    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:01.580940    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:01.620363    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:01.620363    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:01.710696    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
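
Every describe-nodes attempt in this window fails the same way: kubectl on the node cannot reach the API server at localhost:8443, and the error is "connect: connection refused" rather than a timeout, meaning nothing is listening on the port at all. That is consistent with the empty docker ps results for k8s_kube-apiserver above. A quick way to confirm that reading (a hypothetical spot check, not part of the test suite) is a plain TCP dial in Go:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // A refused dial means no process is bound to :8443; a TLS or auth
    // problem would only surface after the connection succeeds.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8443")
    }
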
	I1210 07:35:01.710696    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:01.710696    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:01.736867    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:01.736867    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
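
The block above is one iteration of minikube's apiserver wait loop: pgrep -xnf kube-apiserver.*minikube.* (-x exact match, -f against the full command line, -n newest process only) looks for a running apiserver, and on a miss each expected control-plane container is enumerated with a docker ps -a name filter before the kubelet, dmesg, describe-nodes, Docker, and container-status logs are collected. Below is a minimal Go sketch of the check-and-retry pattern; the function name and the 3-second cadence are inferred from the timestamps, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // checkContainer reports whether any container (running or exited)
    // matches the k8s_<name> filter, mirroring the
    // `docker ps -a --filter=name=... --format={{.ID}}` calls in the log.
    func checkContainer(name string) bool {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
    	for {
    		if checkContainer("kube-apiserver") {
    			fmt.Println("apiserver container present")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3 s cadence in the log
    	}
    }
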
	I1210 07:35:04.295439    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:04.322348    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:04.356895    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.356919    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:04.361858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:04.396943    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.397019    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:04.401065    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:04.431929    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.431929    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:04.436798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:04.468073    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.468073    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:04.472528    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:04.503230    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.503230    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:04.506632    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:04.540016    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.540016    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:04.543627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:04.576446    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.576446    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:04.583292    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:04.611475    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.611542    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:04.611542    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:04.611542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:04.640376    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:04.640433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.695309    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:04.695309    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:04.756418    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:04.756418    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:04.795089    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:04.795089    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:04.891481    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:04.878108   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.880090   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.883096   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.885167   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.886541   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.396688    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:07.422837    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:07.454807    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.454807    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:07.459071    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:07.489720    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.489720    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:07.493466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:07.519982    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.519982    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:07.523858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:07.552985    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.552985    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:07.556972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:07.589709    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.589709    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:07.593709    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:07.621519    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.621519    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:07.625151    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:07.654324    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.654404    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:07.657279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:07.690913    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.690966    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:07.690988    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:07.690988    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:07.757157    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:07.757157    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:07.796333    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:07.796333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:07.893954    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:07.881331   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.882766   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.885657   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887077   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887623   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.893954    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:07.893954    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:07.943452    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:07.943452    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.496562    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:10.522517    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:10.555517    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.555517    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:10.560160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:10.591257    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.591306    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:10.594925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:10.623075    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.623075    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:10.626725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:10.654115    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.654115    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:10.658014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:10.689683    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.689683    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:10.693386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:10.721754    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.721754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:10.725087    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:10.753052    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.753052    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:10.756926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:10.787466    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.787466    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:10.787466    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:10.787466    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:10.882563    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:10.873740   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.874902   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.876114   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.877091   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.878349   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:10.882563    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:10.882563    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:10.944299    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:10.944299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.993835    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:10.993835    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:11.053114    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:11.053114    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
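
Note that the five "Gathering logs for ..." sections come out in a different order on each cycle (kubelet first in some iterations, describe nodes or Docker first in others). That is the signature of ranging over a Go map, whose iteration order is deliberately randomized per traversal; presumably the gatherers are keyed by name, though that is an inference from the log, not confirmed from source:

    package main

    import "fmt"

    func main() {
    	// Map iteration order is randomized by the Go runtime, so these
    	// sections print in a different order on every pass.
    	gatherers := map[string]string{
    		"kubelet":          "journalctl -u kubelet -n 400",
    		"dmesg":            "dmesg --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "kubectl describe nodes",
    		"Docker":           "journalctl -u docker -u cri-docker -n 400",
    		"container status": "crictl ps -a || docker ps -a",
    	}
    	for name := range gatherers {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    	}
    }
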
	I1210 07:35:13.597304    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:13.621417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:13.653723    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.653842    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:13.657020    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:13.690175    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.690175    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:13.693954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:13.723350    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.723350    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:13.728514    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:13.757179    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.757179    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:13.765645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:13.794387    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.794473    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:13.798130    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:13.826937    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.826937    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:13.830895    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:13.865171    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.865171    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:13.869540    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:13.899920    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.899920    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:13.899920    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:13.899920    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:13.964338    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:13.964338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:14.028584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:14.028584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:14.067840    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:14.067840    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:14.154123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:14.144490   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.145615   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.146725   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.148037   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.149069   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:14.154123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:14.154123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:16.685726    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:16.716822    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:16.753764    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.753827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:16.757211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:16.789634    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.789634    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:16.793640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:16.822677    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.822728    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:16.826522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:16.853660    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.853660    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:16.858461    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:16.887452    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.887504    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:16.893014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:16.939344    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.939344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:16.943118    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:16.971703    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.971781    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:16.974884    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:17.003517    1436 logs.go:282] 0 containers: []
	W1210 07:35:17.003595    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:17.003595    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:17.003595    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:17.088355    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:17.079526   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.080729   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.081812   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.083165   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.084419   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:17.088355    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:17.088355    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:17.117181    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:17.117241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:17.168070    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:17.168155    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:17.231584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:17.231584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:19.776112    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:19.801640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:19.835886    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.835886    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:19.839626    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:19.872127    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.872127    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:19.876526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:19.929339    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.929339    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:19.933522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:19.962400    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.962400    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:19.966133    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:19.994468    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.994544    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:19.998645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:20.027252    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.027252    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:20.032575    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:20.060153    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.060153    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:20.065171    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:20.091891    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.091891    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:20.091891    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:20.091891    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:20.131103    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:20.131103    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:20.218614    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:20.208033   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.209212   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.210215   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214139   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214965   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:20.218614    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:20.219146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:20.245788    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:20.245788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:20.298111    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:20.298207    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:22.861878    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:22.887649    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:22.922573    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.922573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:22.926179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:22.959170    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.959197    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:22.963338    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:22.994510    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.994566    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:22.997861    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:23.029960    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.030036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:23.033513    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:23.064625    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.064625    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:23.069769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:23.101906    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.101943    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:23.105651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:23.136615    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.136615    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:23.140616    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:23.170857    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.170942    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:23.170942    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:23.170942    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:23.233098    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:23.233098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:23.273238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:23.273238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:23.361638    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:23.352696   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.354050   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.356707   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.357782   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.358807   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:23.361638    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:23.361638    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:23.390711    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:23.391230    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
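
The container-status command is built defensively: "which crictl || echo crictl" resolves to the crictl path when the binary is installed and to the bare word crictl otherwise, so the trailing "|| sudo docker ps -a" makes Docker the fallback whenever crictl is absent or fails. Invoked from Go it looks roughly like this ($(...) stands in for the backquotes; purely illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// $(which crictl || echo crictl) degrades to the bare name "crictl"
    	// when which finds nothing, so the docker fallback still fires.
    	script := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	if err != nil {
    		fmt.Println("container status collection failed:", err)
    	}
    	fmt.Print(string(out))
    }
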
	I1210 07:35:25.949809    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:25.975470    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:26.007496    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.007496    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:26.011469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:26.044617    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.044617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:26.048311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:26.078756    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.078783    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:26.082359    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:26.112113    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.112183    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:26.115713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:26.148097    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.148097    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:26.151926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:26.182729    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.182753    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:26.186743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:26.217219    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.217219    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:26.223773    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:26.251643    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.251713    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:26.251713    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:26.251713    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:26.278698    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:26.278698    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:26.332014    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:26.332014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:26.394304    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:26.394304    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:26.433073    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:26.433073    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:26.519395    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:26.506069   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.507354   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.509591   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.512516   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.514125   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:29.024398    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:29.049372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:29.084989    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.085019    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:29.089078    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:29.116420    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.116420    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:29.120531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:29.149880    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.149880    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:29.153505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:29.181726    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.181790    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:29.185295    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:29.216713    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.216713    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:29.222568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:29.249487    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.249487    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:29.253512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:29.283473    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.283497    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:29.287061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:29.313225    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.313225    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:29.313225    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:29.313225    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:29.399665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:29.399665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:29.399665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:29.428593    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:29.428593    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:29.477815    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:29.477877    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:29.541874    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:29.541874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
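
By this point the loop has been retrying for roughly thirty seconds at a steady interval of about three seconds (pgrep fires at 07:35:04, :07, :10, and so on through :32), which is the shape of a poll-until-deadline helper. A generic version of that pattern, with the interval read off the timestamps and the timeout chosen purely for illustration:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check at a fixed interval until it succeeds or the
    // deadline passes. A hypothetical helper, not minikube's actual code.
    func waitFor(check func() bool, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if check() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for the apiserver")
    }

    func main() {
    	// Example: a check that never succeeds times out after 30 s.
    	err := waitFor(func() bool { return false }, 3*time.Second, 30*time.Second)
    	fmt.Println(err)
    }
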
	I1210 07:35:32.087876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:32.113456    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:32.145773    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.145805    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:32.149787    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:32.178912    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.178987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:32.182700    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:32.213301    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.213301    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:32.217129    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:32.246756    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.246824    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:32.250299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:32.278791    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.278835    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:32.282397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:32.316208    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.316278    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:32.320233    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:32.349155    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.349155    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:32.352807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:32.386875    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.386875    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:32.386944    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:32.386944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:32.479781    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:32.479781    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:32.479781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:32.506994    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:32.506994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:32.561757    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:32.561757    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:32.624545    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:32.624545    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.176040    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:35.201056    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:35.235735    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.235735    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:35.239655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:35.267349    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.267416    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:35.270515    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:35.303264    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.303264    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:35.306371    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:35.339037    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.339263    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:35.343297    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:35.375639    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.375639    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:35.379647    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:35.407670    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.407670    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:35.411506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:35.446240    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.446240    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:35.450265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:35.477814    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.477814    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:35.477814    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:35.477814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:35.541174    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:35.541174    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.581633    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:35.581633    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:35.673254    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:35.673254    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:35.673254    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:35.701200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:35.701200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:38.255869    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:38.281759    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:38.316123    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.316123    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:38.319358    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:38.348903    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.348943    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:38.352900    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:38.381759    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.381795    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:38.385361    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:38.414524    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.414586    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:38.417710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:38.447131    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.447205    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:38.451100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:38.479508    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.479543    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:38.483003    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:38.512848    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.512848    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:38.516967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:38.547680    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.547680    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:38.547680    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:38.547680    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:38.614038    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:38.614038    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:38.658448    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:38.658448    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:38.743054    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:38.743054    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:38.743054    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:38.775152    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:38.775214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:41.333835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:41.358081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:41.393471    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.393471    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:41.396774    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:41.425173    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.425224    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:41.428523    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:41.456663    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.456663    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:41.459654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:41.490212    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.490212    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:41.493250    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:41.523505    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.523505    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:41.527006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:41.555529    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.555529    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:41.559605    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:41.590913    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.591011    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:41.596392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:41.627361    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.627421    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:41.627441    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:41.627538    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:41.692948    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:41.692948    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:41.731909    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:41.731909    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:41.816121    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:41.816121    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:41.816121    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:41.844622    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:41.844622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:44.401865    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:44.426294    1436 out.go:203] 
	W1210 07:35:44.428631    1436 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:35:44.428631    1436 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:35:44.428631    1436 out.go:285] * Related issues:
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:35:44.430629    1436 out.go:203] 
	
	
	==> Docker <==
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216617054Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216699662Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216710563Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216717064Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216722865Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216746967Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216779770Z" level=info msg="Initializing buildkit"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.379150718Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395276092Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395426306Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395462310Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395512215Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:29:38 newest-cni-525200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:29:39 newest-cni-525200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:29:39 newest-cni-525200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:48.250195   19624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:48.251384   19624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:48.252563   19624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:48.253915   19624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:48.255495   19624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347496] CPU: 6 PID: 490841 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe73ddc4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe73ddc4af6.
	[  +0.000000] RSP: 002b:00007ffc57a05a90 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.867258] CPU: 5 PID: 491006 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1a7acb4b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1a7acb4af6.
	[  +0.000001] RSP: 002b:00007ffe19029200 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:32] tmpfs: Unknown parameter 'noswap'
	[ +15.541609] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:35:48 up  3:04,  0 user,  load average: 2.12, 3.59, 4.34
	Linux newest-cni-525200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:35:45 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:45 newest-cni-525200 kubelet[19456]: E1210 07:35:45.407829   19456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:45 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:45 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:46 newest-cni-525200 kubelet[19470]: E1210 07:35:46.191182   19470 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:46 newest-cni-525200 kubelet[19496]: E1210 07:35:46.899000   19496 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:46 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:47 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 10 07:35:47 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:47 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:47 newest-cni-525200 kubelet[19513]: E1210 07:35:47.656982   19513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:47 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:47 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:48 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 10 07:35:48 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:48 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
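Note on the capture above: the kubelet journal shows the node stuck in a restart loop (restart counter 486-489) because kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host, and the kernel section confirms a 5.15.153.1-microsoft-standard-WSL2 kernel, which mounts the legacy v1 hierarchy by default. With kubelet never up, no apiserver container is ever created, which is why every kubectl probe fails with "connection refused" on localhost:8443. A minimal diagnostic sketch, assuming docker access to the node container named in the log; the .wslconfig change is a host-level assumption, not part of the test harness:

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy v1.
    docker exec newest-cni-525200 stat -fc %T /sys/fs/cgroup
    # Hypothetical host-side fix for a WSL2 node: force cgroup v2 via
    # %UserProfile%\.wslconfig, then restart WSL with "wsl --shutdown":
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all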
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (617.0006ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-525200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (382.22s)
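Each retry cycle in the log above runs the same probe sequence: pgrep for the apiserver process, docker ps filters for each control-plane container (all returning "0 containers"), then log gathering, until the 6m0s wait expires with K8S_APISERVER_MISSING. A hedged manual reproduction of those probes, using only commands and paths that appear in the log (not an official minikube workflow):

    # Apiserver process check (same pattern the harness loops on):
    docker exec newest-cni-525200 sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo 'apiserver process never appeared'
    # Port probe matching the refused connections reported by kubectl:
    docker exec newest-cni-525200 curl -sk https://localhost:8443/healthz \
      || echo 'nothing listening on 8443'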

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.72s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1210 07:34:18.964804   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:34:45.978679   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:34:50.393465   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:34:51.474039   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:34:58.161157   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:35:13.759533   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:35:18.103860   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:35:30.288443   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.295523   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.307387   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.329587   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.372042   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.453874   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.615404   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:30.937528   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:31.579008   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:32.861717   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:35.424227   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:35:40.546704   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:36:07.364126   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:36:09.072708   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:36:11.271504   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:36:11.458291   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:36:13.397819   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:36:17.606749   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:36:26.811951   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:36:38.088976   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:36:52.234134   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:37:19.051359   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:37:28.665466   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:29.895679   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:34.538249   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:37:51.802088   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:51.808880   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:51.820926   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:51.842974   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:51.884775   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:51.966908   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:52.128796   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:52.450847   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:53.092285   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:54.374405   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:56.935897   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:37:57.603599   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:38:02.058588   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:38:02.351742   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:38:10.205303   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:38:12.301169   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:38:14.156810   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:38:29.530240   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:38:32.784478   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:38:40.974960   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:38:57.243276   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:39:13.747092   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:39:18.969897   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:39:45.983683   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:39:50.398619   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:39:58.166113   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:40:30.293317   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:35.670497   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:40:57.086510   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:58.001315   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:40:59.098337   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:41:11.462286   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:41:21.240470   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:24.819942   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:42:28.669964   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:42:29.900590   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:42:51.807316   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:43:02.356080   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:43:10.209375   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
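Note: the cert_rotation errors interleaved above appear to come from client-go's transport cache still watching client certificates for profiles (custom-flannel-648600, bridge-648600, calico-648600, ...) that parallel tests had already deleted; the Audit table further down shows, for example, "delete -p custom-flannel-648600" completing at 07:33, minutes before these reads fail. Stale kubeconfig contexts left behind by a deleted profile can be listed and pruned by hand (a hypothetical cleanup, assuming kubectl is on PATH and KUBECONFIG points at the integration kubeconfig):

	kubectl config get-contexts -o name
	kubectl config delete-context custom-flannel-648600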
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 2 (596.1957ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
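For reference, the wait that timed out corresponds roughly to the following manual check (the kubectl context name is assumed to match the profile name, as minikube normally sets it; the 9m timeout mirrors the test's deadline):

	kubectl --context no-preload-099700 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-099700 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m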
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:27:59.880122532Z",
	            "FinishedAt": "2025-12-10T07:27:56.24098096Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36b12f7c82c546811ea16d124f8782cdd27350c19ac1d3ab3f547c6a6d9a2eab",
	            "SandboxKey": "/var/run/docker/netns/36b12f7c82c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57440"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "5663a1495caac3a8be49ce34bbbb4f5a9e88b108cb75e92d2208550cc897ee2e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
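The Ports block in the inspect output above explains the https://127.0.0.1:57440 endpoint seen in the failures: the container's 8443/tcp (the apiserver port) is published on host port 57440. The mapping can be read back directly with a standard inspect template, e.g. (quoting shown for a POSIX shell):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-099700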
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 2 (603.9693ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (1.4535355s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-648600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat docker --no-pager                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo docker system info                                       │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cri-dockerd --version                                    │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo containerd config dump                                   │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat crio --no-pager                            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo crio config                                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ delete  │ -p custom-flannel-648600                                                               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ image   │ newest-cni-525200 image list --format=json                                             │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ pause   │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ unpause │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete  │ -p newest-cni-525200                                                                   │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:36 UTC │ 10 Dec 25 07:36 UTC │
	│ delete  │ -p newest-cni-525200                                                                   │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:36 UTC │ 10 Dec 25 07:36 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
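	The ssh rows above are minikube's post-mortem sweep of the container runtime config; each row is an ordinary minikube ssh invocation of the form (with <profile> standing in for a profile that still exists):
	
	out/minikube-windows-amd64.exe -p <profile> ssh -- sudo systemctl status docker --all --full --no-pager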
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:31:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.483636    2240 out.go:374] Setting ErrFile to fd 1148...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.498633    2240 out.go:368] Setting JSON to false
	I1210 07:31:27.500624    2240 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10819,"bootTime":1765341068,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:31:27.500624    2240 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:31:27.505874    2240 out.go:179] * [custom-flannel-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:31:27.510785    2240 notify.go:221] Checking for updates...
	I1210 07:31:27.513604    2240 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:31:27.516776    2240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:31:27.521423    2240 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:31:27.524646    2240 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:31:27.526628    2240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:27.530138    2240 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:27.530637    2240 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.530927    2240 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.531072    2240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:31:27.674116    2240 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:31:27.679999    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:27.935225    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:27.906881904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:27.940210    2240 out.go:179] * Using the docker driver based on user configuration
	I1210 07:31:27.947210    2240 start.go:309] selected driver: docker
	I1210 07:31:27.947210    2240 start.go:927] validating driver "docker" against <nil>
	I1210 07:31:27.947210    2240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:31:28.038927    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:28.306393    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:28.276193336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:28.307456    2240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:31:28.308474    2240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:31:28.311999    2240 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:31:28.314563    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:31:28.314921    2240 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 07:31:28.314921    2240 start.go:353] cluster config:
	{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:31:28.317704    2240 out.go:179] * Starting "custom-flannel-648600" primary control-plane node in "custom-flannel-648600" cluster
	I1210 07:31:28.318967    2240 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:31:28.320981    2240 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
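	The connection-refused burst above means nothing was listening on the node's localhost:8443 at that point; the probes that follow (pgrep for kube-apiserver, docker ps for the k8s_* containers) confirm the control-plane containers were gone. The same spot check can be run by hand against a profile's node (with <profile> standing in for the profile under test):
	
	out/minikube-windows-amd64.exe -p <profile> ssh -- docker ps -a --filter name=k8s_kube-apiserver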
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
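
The cycle ending here is minikube's diagnostic fallback once the apiserver stops answering: probe each expected control-plane container by name filter, then dump kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of that probe loop, assuming only a Docker CLI on PATH (probeContainers is an illustrative name, not minikube's actual logs.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Mirrors the probe pattern in the logs.go lines above: for each expected
    // control-plane component, list matching container IDs with a name filter.
    // An empty result is what produces the "No container was found" warning.
    func probeContainers() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("probe %s: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }

    func main() { probeContainers() }

An empty ID list for every component, as seen above, is what drives the repeated warnings and the fallback to journalctl and dmesg.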
	I1210 07:31:28.323967    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:28.323967    2240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:31:28.370604    2240 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.410253    2240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:31:28.410253    2240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:31:28.586590    2240 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
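
The two 404s above trace minikube's preload lookup order: the GCS volume-tarball bucket first, then the kubernetes-sigs/minikube-preloads GitHub release mirror; when both miss, it falls back to caching images one by one, which is what the localpath.go lines below do. A hedged sketch of that probe, with the URL shapes taken from the log and the control flow assumed:

    package main

    import (
        "fmt"
        "net/http"
    )

    // Two-tier preload lookup suggested by the 404s above. The URL patterns
    // come from the log; the probing logic itself is illustrative.
    func findPreload(k8sVersion string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
        candidates := []string{
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/" + k8sVersion + "/" + name,
            "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/" + name,
        }
        for _, u := range candidates {
            resp, err := http.Head(u)
            if err != nil {
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return u, true
            }
        }
        return "", false // both mirrors 404ed: fall back to per-image cache
    }

    func main() {
        if u, ok := findPreload("v1.34.3"); ok {
            fmt.Println("preload at", u)
        } else {
            fmt.Println("no preload; caching images individually")
        }
    }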
	I1210 07:31:28.586590    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:28.586590    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json: {Name:mk37135597d0b3e0094e1cb1b5ff50d942db06b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
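
The windows sanitize lines rewrite each image reference into a legal Windows file name: NTFS reserves ':', so the tag separator becomes '_' under the cache root. Roughly, as a sketch (sanitizeImagePath is illustrative, not minikube's exact localpath.go function, and the cache root below is a placeholder):

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // Shows the transformation logged above: etcd:3.6.5-0 -> etcd_3.6.5-0.
    // A ref carrying a registry port would also hit the replace; the logged
    // refs have only the tag colon.
    func sanitizeImagePath(cacheDir, imageRef string) string {
        safe := strings.ReplaceAll(imageRef, ":", "_")
        return filepath.Join(cacheDir, filepath.FromSlash(safe))
    }

    func main() {
        fmt.Println(sanitizeImagePath(`C:\mk\cache\images\amd64`, "registry.k8s.io/etcd:3.6.5-0"))
    }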
	I1210 07:31:28.587928    2240 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:31:28.587928    2240 start.go:360] acquireMachinesLock for custom-flannel-648600: {Name:mk4a3a34c58cff29c46217d57a91ed79fc9f522b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:28.588459    2240 start.go:364] duration metric: took 531.3µs to acquireMachinesLock for "custom-flannel-648600"
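
The {Name ... Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>} blob printed for acquireMachinesLock is the spec of a poll-style named mutex (minikube uses a juju/mutex-style lock here): retry every Delay until Timeout. The retry discipline only, sketched with a channel standing in for the named lock:

    package main

    import (
        "fmt"
        "time"
    )

    // Poll-based acquisition matching the Delay/Timeout fields above; the
    // channel is a stand-in for a cross-process named lock.
    func acquire(lock chan struct{}, delay, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            select {
            case lock <- struct{}{}:
                return true // acquired
            default:
                time.Sleep(delay)
            }
        }
        return false
    }

    func main() {
        lock := make(chan struct{}, 1)
        start := time.Now()
        fmt.Println(acquire(lock, 500*time.Millisecond, 10*time.Minute), time.Since(start))
    }

The 531.3µs duration above is the uncontended case: the first send succeeds immediately.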
	I1210 07:31:28.588615    2240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:31:28.588742    2240 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:31:28.592548    2240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:31:28.593172    2240 start.go:159] libmachine.API.Create for "custom-flannel-648600" (driver="docker")
	I1210 07:31:28.593172    2240 client.go:173] LocalClient.Create starting
	I1210 07:31:28.593172    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.601656    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:31:28.702719    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:31:28.710721    2240 network_create.go:284] running [docker network inspect custom-flannel-648600] to gather additional debugging logs...
	I1210 07:31:28.710721    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600
	W1210 07:31:28.938963    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 returned with exit code 1
	I1210 07:31:28.938963    2240 network_create.go:287] error running [docker network inspect custom-flannel-648600]: docker network inspect custom-flannel-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-648600 not found
	I1210 07:31:28.938963    2240 network_create.go:289] output of [docker network inspect custom-flannel-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-648600 not found
	
	** /stderr **
	I1210 07:31:28.945949    2240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:31:29.091971    2240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.381586    2240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.465291    2240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a8ae0}
	I1210 07:31:29.465291    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:31:29.470056    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.046347    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.046347    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.046347    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:31:30.140283    2240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.262644    2240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e1d40}
	I1210 07:31:30.262866    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:31:30.267646    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.581811    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.581811    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.581811    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.76.0/24, will retry: subnet is taken
	I1210 07:31:30.621040    2240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.648052    2240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cde450}
	I1210 07:31:30.648052    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:31:30.656045    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	I1210 07:31:30.870907    2240 network_create.go:108] docker network custom-flannel-648600 192.168.85.0/24 created
	I1210 07:31:30.870907    2240 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-648600" container
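
The lines above show the free-subnet walk: candidates step through the 192.168.x.0/24 private range (49, 58, 67, 76, 85 — a stride of 9 in the third octet), locally reserved subnets are skipped, and a daemon-side "Pool overlaps" rejection marks the candidate taken and advances the walk; the gateway takes .1 and the first node .2. A sketch under those inferred parameters (the stride and bounds are assumptions read off the observed sequence, not minikube's documented values):

    package main

    import "fmt"

    // Walk 192.168.x.0/24 candidates in the order observed above, skipping
    // subnets already known taken (including ones dockerd rejected as
    // overlapping), and hand back gateway (.1) and first node IP (.2).
    func nextFreeSubnet(taken map[string]bool) (subnet, gateway, nodeIP string, ok bool) {
        for octet := 49; octet <= 247; octet += 9 {
            s := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[s] {
                continue
            }
            return s,
                fmt.Sprintf("192.168.%d.1", octet), // gateway
                fmt.Sprintf("192.168.%d.2", octet), // first node
                true
        }
        return "", "", "", false
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
        }
        fmt.Println(nextFreeSubnet(taken))
    }

With 49, 58, 67, and 76 marked taken, the sketch lands on 192.168.85.0/24 — the network that finally succeeded above.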
	I1210 07:31:30.881906    2240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:31:31.006456    2240 cli_runner.go:164] Run: docker volume create custom-flannel-648600 --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:31:31.098467    2240 oci.go:103] Successfully created a docker volume custom-flannel-648600
	I1210 07:31:31.104469    2240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
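
This docker run --rm is the preload-sidecar trick: mounting the freshly created named volume at /var lets Docker populate it from the kicbase image's /var (the daemon copies image contents into an empty named volume), and the /usr/bin/test -d /var/lib entrypoint doubles as the success check; the "Successfully prepared" line appears once it exits 0, further down. A sketch of the invocation, with the volume and image names as placeholders:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Run a throwaway container that mounts the named volume at /var and
    // exits 0 only if the image's /var/lib made it into the volume.
    func prepareVolume(volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", volume+":/var", image, "-d", "/var/lib")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("prepare %s: %v: %s", volume, err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(prepareVolume("demo-vol", "ubuntu:24.04"))
    }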
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2058554s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:31:31.792496    2240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.2053301s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:31:31.794500    2240 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.794500    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:31:31.794500    2240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2078599s
	I1210 07:31:31.795487    2240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:31:31.796493    2240 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.796493    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:31:31.796493    2240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.2098526s
	I1210 07:31:31.796493    2240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:31:31.809204    2240 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.809204    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:31:31.809204    2240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2225634s
	I1210 07:31:31.809728    2240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:31:31.821783    2240 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.822582    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:31:31.822582    2240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.2354164s
	I1210 07:31:31.822582    2240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:31:31.828690    2240 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.828690    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:31:31.828690    2240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.2420491s
	I1210 07:31:31.828690    2240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:31:31.868175    2240 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.869189    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:31:31.869189    2240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.2820228s
	I1210 07:31:31.869189    2240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:31:31.869189    2240 cache.go:87] Successfully saved all images to host disk.
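
Each cache.go triplet above follows the same lock/check/skip pattern per image: take a named lock, and if the tarball already exists under the cache root, report the save as done (the ~3.2s "took" figures measure time waiting behind earlier work, not a fresh download). The check itself reduces to a stat, sketched here with the locking elided:

    package main

    import (
        "fmt"
        "os"
    )

    // True when the per-image tarball is already on disk, so the save step
    // can be skipped — the "exists ... succeeded" pairs in the log above.
    func cachedTarExists(path string) bool {
        info, err := os.Stat(path)
        return err == nil && !info.IsDir()
    }

    func main() {
        fmt.Println(cachedTarExists("cache/images/amd64/registry.k8s.io/etcd_3.6.5-0"))
    }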
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
E1210 07:43:19.515157   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:32.772569    2240 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6680738s)
	I1210 07:31:32.772569    2240 oci.go:107] Successfully prepared a docker volume custom-flannel-648600
	I1210 07:31:32.772569    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:32.777565    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:33.023291    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:33.001747684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:33.027286    2240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:31:33.264619    2240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-648600 --name custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-648600 --network custom-flannel-648600 --ip 192.168.85.2 --volume custom-flannel-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:31:34.003194    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Running}}
	I1210 07:31:34.069196    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.137196    2240 cli_runner.go:164] Run: docker exec custom-flannel-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:31:34.255530    2240 oci.go:144] the created container "custom-flannel-648600" has a running status.
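
The long docker run above publishes 8443, 22, 2376, 5000, and 32443 to ephemeral loopback ports (--publish=127.0.0.1::22 lets Docker choose the host port); the mapped SSH port is recovered later with the inspect template visible below (port 58200 in this run). A sketch of that recovery step:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Read back the ephemeral host port Docker assigned to the container's
    // SSH endpoint, using the same inspect template as the log.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(sshPort("custom-flannel-648600"))
    }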
	I1210 07:31:34.255530    2240 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:34.371827    2240 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:31:34.454671    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.514682    2240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:31:34.514682    2240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:31:34.665673    2240 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
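
These lines are the kic SSH bootstrap: generate a keypair on the host, push the public half into the container's /home/docker/.ssh/authorized_keys (the 381-byte upload above), then chown it to docker:docker via a privileged exec. The key-material half, sketched with crypto/rsa plus golang.org/x/crypto/ssh (an external module; the rest of the bootstrap is elided):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // Produce a PEM private key for the host side and an authorized_keys
    // line for the container side, matching the id_rsa/id_rsa.pub pair above.
    func makeKeyPair() (privPEM, authorizedKey []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        privPEM = pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return nil, nil, err
        }
        return privPEM, ssh.MarshalAuthorizedKey(pub), nil
    }

    func main() {
        _, auth, err := makeKeyPair()
        fmt.Printf("%serr=%v\n", auth, err)
    }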
	I1210 07:31:37.044619    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:37.095607    2240 machine.go:94] provisionDockerMachine start ...
	I1210 07:31:37.098607    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.155601    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.171620    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.171620    2240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:31:37.347331    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.347331    2240 ubuntu.go:182] provisioning hostname "custom-flannel-648600"
	I1210 07:31:37.350327    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.408671    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.409222    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.409222    2240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:37.617301    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.621329    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.680493    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.681514    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.681514    2240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:31:37.850452    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
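
The SSH command above keeps the /etc/hosts edit idempotent: if a line already ends with the hostname, do nothing; otherwise rewrite an existing 127.0.1.1 entry in place or append one. The same logic rendered as a host-side Go function over a local file (the path and function name here are illustrative):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // Idempotent hostname entry: skip if present, rewrite 127.0.1.1 if
    // found, append otherwise — mirroring the grep/sed/tee script above.
    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
            return nil // already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + name
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte(entry))
        } else {
            data = append(data, []byte("\n"+entry+"\n")...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        fmt.Println(ensureHostname("hosts.txt", "custom-flannel-648600"))
    }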
	I1210 07:31:37.850452    2240 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:31:37.850452    2240 ubuntu.go:190] setting up certificates
	I1210 07:31:37.850452    2240 provision.go:84] configureAuth start
	I1210 07:31:37.855263    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:37.926854    2240 provision.go:143] copyHostCerts
	I1210 07:31:37.927569    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:31:37.927608    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:31:37.928059    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:31:37.928961    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:31:37.928961    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:31:37.928961    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:31:37.930358    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:31:37.930390    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:31:37.930744    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:31:37.931754    2240 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-648600 san=[127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube]
	I1210 07:31:38.038131    2240 provision.go:177] copyRemoteCerts
	I1210 07:31:38.042277    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:31:38.045314    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.098793    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:38.243502    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:31:38.284050    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1210 07:31:38.320436    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:31:38.351829    2240 provision.go:87] duration metric: took 501.3694ms to configureAuth
	I1210 07:31:38.351829    2240 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:31:38.352840    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:38.355824    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.405824    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.405824    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.405824    2240 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:31:38.582107    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:31:38.582107    2240 ubuntu.go:71] root file system type: overlay
	I1210 07:31:38.582107    2240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:31:38.585874    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.646407    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.646407    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.646407    2240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:31:38.847766    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
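	The empty ExecStart= followed immediately by a populated one is the standard systemd idiom the unit's own comments describe: the blank directive clears the command inherited from the base configuration so exactly one ExecStart remains (otherwise systemd refuses to start with the "more than one ExecStart=" error quoted above). The same idiom as a minimal drop-in rather than a full unit rewrite (a sketch under standard-path assumptions, not what minikube runs):

	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    printf '%s\n' '[Service]' 'ExecStart=' \
	      'ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock' \
	      | sudo tee /etc/systemd/system/docker.service.d/override.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker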
	
	I1210 07:31:38.852241    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.938899    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.938899    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.938899    2240 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:31:40.711527    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:31:38.832035101 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
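	The "diff || { mv; daemon-reload; enable; restart; }" command above is a change-guarded install: diff -u exits 0 when the generated unit matches what is already installed, so nothing happens; a nonzero exit (as here, hence the hunk printed above) triggers the move and the Docker restart. The pattern in isolation (a sketch; the /tmp path is hypothetical):

	    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new >/dev/null; then
	      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload && sudo systemctl restart docker
	    fi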
	
	I1210 07:31:40.711665    2240 machine.go:97] duration metric: took 3.616002s to provisionDockerMachine
	I1210 07:31:40.711665    2240 client.go:176] duration metric: took 12.1183047s to LocalClient.Create
	I1210 07:31:40.711665    2240 start.go:167] duration metric: took 12.1183047s to libmachine.API.Create "custom-flannel-648600"
	I1210 07:31:40.711665    2240 start.go:293] postStartSetup for "custom-flannel-648600" (driver="docker")
	I1210 07:31:40.711665    2240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:31:40.715645    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:31:40.718723    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:40.776513    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:40.917451    2240 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:31:40.923444    2240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:31:40.923444    2240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:31:40.923444    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:31:40.929458    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:31:40.942452    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:31:40.977491    2240 start.go:296] duration metric: took 265.8211ms for postStartSetup
	I1210 07:31:40.981481    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.034489    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:41.039496    2240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:31:41.043532    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.111672    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.255080    2240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:31:41.269938    2240 start.go:128] duration metric: took 12.6809984s to createHost
	I1210 07:31:41.269938    2240 start.go:83] releasing machines lock for "custom-flannel-648600", held for 12.6812262s
	I1210 07:31:41.273664    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.324666    2240 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:31:41.329678    2240 ssh_runner.go:195] Run: cat /version.json
	I1210 07:31:41.329678    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.334670    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	W1210 07:31:41.497715    2240 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:31:41.501431    2240 ssh_runner.go:195] Run: systemctl --version
	I1210 07:31:41.518880    2240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:31:41.528176    2240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:31:41.531184    2240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:31:41.579185    2240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:31:41.579185    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:41.579185    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:41.579185    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:31:41.596178    2240 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:31:41.596178    2240 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:31:41.606178    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:31:41.626187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:31:41.641198    2240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:31:41.645182    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:31:41.668187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.687179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:31:41.706179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.724180    2240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:31:41.742180    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:31:41.759185    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:31:41.778184    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:31:41.795180    2240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:31:41.811185    2240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:31:41.828187    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:41.983806    2240 ssh_runner.go:195] Run: sudo systemctl restart containerd
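	The sed sequence above aligns containerd with the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, legacy io.containerd.runtime.v1.linux/runc.v1 runtimes are mapped to io.containerd.runc.v2, the CNI conf_dir is pinned, and containerd is restarted. Since the kubelet and the runtime must agree on the cgroup driver, the essential manual equivalent is just (a sketch, assuming the stock config path used in the log):

	    sudo sed -i 's/SystemdCgroup = true/SystemdCgroup = false/' /etc/containerd/config.toml
	    sudo systemctl restart containerd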
	I1210 07:31:42.163822    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:42.163822    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:42.167818    2240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:31:42.193819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.216825    2240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:31:42.280833    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.301820    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:31:42.320823    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:31:42.345832    2240 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:31:42.358831    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:31:42.373835    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:31:42.401822    2240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:42.551686    2240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:31:42.712827    2240 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:31:42.712827    2240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:31:42.735824    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:31:42.756828    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:42.906845    2240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:31:43.937123    2240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0302614s)
	I1210 07:31:43.944887    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:31:43.971819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:31:43.996364    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.030377    2240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:31:44.173489    2240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:31:44.332105    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.483148    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:31:44.509404    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:31:44.533765    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.690011    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:31:44.790147    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.810716    2240 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:31:44.813714    2240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:31:44.820719    2240 start.go:564] Will wait 60s for crictl version
	I1210 07:31:44.824717    2240 ssh_runner.go:195] Run: which crictl
	I1210 07:31:44.835701    2240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:31:44.880457    2240 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:31:44.883920    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:44.928460    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:45.060104    2240 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:31:45.062900    2240 cli_runner.go:164] Run: docker exec -t custom-flannel-648600 dig +short host.docker.internal
	I1210 07:31:45.193754    2240 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:31:45.197851    2240 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:31:45.204880    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
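	The gateway discovery above works by resolving Docker Desktop's magic name host.docker.internal from inside the node container, then pinning the answer as host.minikube.internal in the node's /etc/hosts. Reproducing the lookup by hand (container name from the log):

	    docker exec -t custom-flannel-648600 dig +short host.docker.internal
	    # 192.168.65.254 on this host, per the line above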
	I1210 07:31:45.225085    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:45.282870    2240 kubeadm.go:884] updating cluster {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:31:45.283875    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:45.286873    2240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:31:45.317078    2240 docker.go:691] Got preloaded images: 
	I1210 07:31:45.317078    2240 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:31:45.317078    2240 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
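	LoadCachedImages then walks this list image by image: the host-side daemon lookup fails (no such image in the local Docker), the anonymous registry lookup supplies the expected digest, and a node-side docker image inspect decides whether the image "needs transfer". The per-image check in isolation (a sketch; image name from the list above):

	    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.34.3 \
	      || echo 'absent or stale: rmi, scp the cached tarball, docker load'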
	I1210 07:31:45.330428    2240 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.336331    2240 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.341435    2240 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.341435    2240 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.347452    2240 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.347452    2240 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.352434    2240 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.355426    2240 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.358455    2240 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.361429    2240 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.365434    2240 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.366439    2240 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.369440    2240 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:45.370428    2240 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.374431    2240 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.379430    2240 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:31:45.411422    2240 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.466193    2240 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.518621    2240 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.573883    2240 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.622874    2240 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.672905    2240 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.723034    2240 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.771034    2240 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:31:45.842424    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.842823    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.869734    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890739    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890951    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.897121    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.901151    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.922366    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:31:45.956325    2240 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:31:45.956325    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:45.956325    2240 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.961320    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.992754    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:31:46.059786    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:31:46.060783    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.065694    2240 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:31:46.065694    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.065694    2240 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:31:46.067530    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.067911    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.068609    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:46.070610    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:31:46.073597    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.074603    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.147805    2240 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:31:46.151807    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.261151    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:46.262119    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:46.272115    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.272115    2240 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:31:46.272115    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.272115    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:31:46.272115    2240 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.272115    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:31:46.277116    2240 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.278121    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.289109    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.293116    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:31:46.476808    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:31:46.481795    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:46.504793    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:31:46.504793    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:31:46.672791    2240 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.672791    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:31:47.172597    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:31:47.208589    2240 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:47.208589    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
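
The block above shows minikube's cached-image pipeline for this profile: for each image it first stats the remote path under /var/lib/minikube/images, scps the tarball over only when that check fails, then streams it into the node's Docker daemon with `sudo cat <tar> | docker load`. A minimal Go sketch of the same check-transfer-load sequence, shelling out to ssh/scp (the host alias and paths are placeholders for illustration, not what minikube's internal ssh_runner actually uses):

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile mirrors the stat-then-scp pattern in the log: a failing
// `stat -c "%s %y"` means the file is missing, so the tarball is copied.
func ensureRemoteFile(host, localPath, remotePath string) error {
	check := exec.Command("ssh", host, "stat", "-c", `"%s %y"`, remotePath)
	if out, err := check.Output(); err == nil {
		fmt.Printf("exists, skipping: %s (%s)\n", remotePath, out)
		return nil
	}
	return exec.Command("scp", localPath, host+":"+remotePath).Run()
}

// loadImage matches the `sudo cat <tar> | docker load` step above.
func loadImage(host, remoteTar string) error {
	pipe := fmt.Sprintf("sudo cat %s | docker load", remoteTar)
	return exec.Command("ssh", host, "/bin/bash", "-c", pipe).Run()
}

func main() {
	// "minikube-node" and both paths are illustrative only.
	const host, tar = "minikube-node", "/var/lib/minikube/images/pause_3.10.1"
	if err := ensureRemoteFile(host, "cache/pause_3.10.1", tar); err != nil {
		panic(err)
	}
	if err := loadImage(host, tar); err != nil {
		panic(err)
	}
}
```
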
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
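
Every `kubectl describe nodes` attempt in this stretch fails before it can even list API groups: nothing is listening on localhost:8443, which is consistent with the container scans below finding no kube-apiserver container at all. A quick reachability probe in the same spirit (address taken from the log; this is a sketch, not part of minikube):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// main dials the apiserver's secure port; "connection refused" here means
// no process is listening, exactly what the kubectl errors above report.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```
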
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
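
Each diagnostic pass repeats the same scan: for every control-plane component, list containers whose name matches the `k8s_<component>` prefix, and when all scans come back empty, fall back to kubelet/dmesg/Docker logs. A condensed sketch of that loop (docker CLI assumed on PATH; `k8s_` is the container naming convention visible in the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Same filter the log shows: docker ps -a --filter=name=k8s_<comp>
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "scan failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```
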
	I1210 07:31:48.287161    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0785558s)
	I1210 07:31:48.287161    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:31:48.287161    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:48.287161    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:31:51.130300    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.8430943s)
	I1210 07:31:51.130300    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:31:51.130300    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:51.130300    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:31:52.383759    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.2534401s)
	I1210 07:31:52.383759    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:31:52.383759    2240 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:52.383759    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.448511    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:52.475325    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:52.506360    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.506360    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:52.510172    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:52.540147    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.540147    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:52.544437    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:52.575774    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.575774    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:52.579336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:52.610061    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.610061    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:52.613342    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:52.642765    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.642765    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:52.649215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:52.678701    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.678701    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:52.682526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:52.710203    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.710203    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:52.715870    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:52.745326    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.745351    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:52.745351    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:52.745397    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.811401    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:52.811401    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:52.853138    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:52.853138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:52.968335    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:52.968335    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:52.968335    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:52.995279    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:52.995802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:55.245680    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8618761s)
	I1210 07:31:55.245680    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:31:55.246466    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:55.246522    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:31:56.790187    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.5436405s)
	I1210 07:31:56.790187    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:31:56.790187    2240 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:56.790187    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:55.548093    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:55.571449    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:55.603901    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.603970    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:55.607695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:55.639065    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.639065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:55.643536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:55.671930    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.671930    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:55.675998    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:55.704460    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.704460    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:55.708947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:55.739257    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.739257    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:55.742852    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:55.772295    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.772344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:55.776423    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:55.803812    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.803812    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:55.809849    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:55.841586    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.841647    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:55.841647    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:55.841647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:55.916368    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:55.916368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:55.958653    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:55.958653    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:56.055702    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:56.055702    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:56.055702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:56.084883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:56.084883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.290113    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (4.4998566s)
	I1210 07:32:01.290113    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:32:01.290113    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:32:01.290113    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	I1210 07:31:58.642350    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:58.668189    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:58.699633    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.699633    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:58.705036    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:58.738553    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.738553    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:58.742579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:58.772414    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.772414    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:58.775757    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:58.804872    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.804872    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:58.808509    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:58.835398    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.835398    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:58.843124    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:58.871465    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.871465    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:58.875535    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:58.905029    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.905108    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:58.910324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:58.953100    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.953100    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:58.953100    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:58.953100    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:59.012946    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:59.012946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:59.052964    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:59.052964    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:59.146228    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:59.146228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:59.146228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:59.173200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:59.173200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.725170    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:01.746739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:01.779670    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.779670    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:01.783967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:01.812617    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.812617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:01.817482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:01.848083    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.848083    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:01.852344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:01.883648    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.883648    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:01.887655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:01.918403    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.918403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:01.922409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:01.961721    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.961721    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:01.969744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:01.998302    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.998302    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:02.003804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:02.032315    1436 logs.go:282] 0 containers: []
	W1210 07:32:02.032315    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:02.032315    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:02.032315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:02.096900    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:02.096900    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:02.136137    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:02.136137    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:02.227732    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:02.227732    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:02.227732    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:02.255236    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:02.255236    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:03.670542    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3803916s)
	I1210 07:32:03.670542    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:32:03.670542    2240 cache_images.go:125] Successfully loaded all cached images
	I1210 07:32:03.670542    2240 cache_images.go:94] duration metric: took 18.3531776s to LoadCachedImages
	I1210 07:32:03.670542    2240 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1210 07:32:03.670542    2240 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
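
The generated kubelet drop-in uses the standard systemd idiom of an empty `ExecStart=` line to clear the base unit's command before substituting minikube's own kubelet invocation, with the node name, kubeconfig and node IP pinned per profile. A sketch of rendering such a unit with text/template (the template and field names are illustrative, not minikube's actual ones):

```go
package main

import (
	"os"
	"text/template"
)

// unitTmpl mirrors the structure of the drop-in in the log: the empty
// ExecStart= line resets any ExecStart inherited from the base unit.
const unitTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type unitData struct {
	KubeletPath, NodeName, NodeIP string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log above; the template itself is a sketch.
	t.Execute(os.Stdout, unitData{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.3/kubelet",
		NodeName:    "custom-flannel-648600",
		NodeIP:      "192.168.85.2",
	})
}
```
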
	I1210 07:32:03.674057    2240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:32:03.753844    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:03.753844    2240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:03.753844    2240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-648600 NodeName:custom-flannel-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:03.753844    2240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
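The rendered kubeadm config above is one file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A sketch that walks such a multi-document stream with gopkg.in/yaml.v3 (assumed dependency) and prints each document's apiVersion and kind:

```go
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// main reads a multi-document YAML file (e.g. the kubeadm config above)
// and prints each document's apiVersion and kind.
func main() {
	f, err := os.Open("kubeadm.yaml") // path is illustrative
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```
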
	I1210 07:32:03.758233    2240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.772950    2240 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:32:03.777455    2240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
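
Because the k8s binaries aren't in the local cache, they're fetched directly from dl.k8s.io, with the `checksum=file:` suffix pointing at the published .sha256 file next to each binary. A standard-library sketch of the same download-and-verify step (URL from the log; error handling kept minimal):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchVerified downloads url, hashes the body with SHA-256, and compares
// it against the published <url>.sha256 file, like the checksum=file:
// references in the log above.
func fetchVerified(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}

	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func main() {
	const url = "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm"
	b, err := fetchVerified(url)
	if err != nil {
		panic(err)
	}
	fmt.Println("verified", len(b), "bytes")
}
```
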
	I1210 07:32:03.796039    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:03.796814    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:32:03.796843    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:32:03.817843    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:32:03.818011    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:32:03.818298    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:32:03.818803    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:32:03.822978    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:32:03.833074    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:32:03.833638    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 07:32:05.838364    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:05.850364    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1210 07:32:05.870151    2240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:05.891336    2240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:32:05.915010    2240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:05.922767    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
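
The one-liner above is the usual trick for editing /etc/hosts under sudo: strip any existing control-plane.minikube.internal line, append the fresh mapping, write the result to a temp file, then `sudo cp` it into place (a plain `sudo cmd > /etc/hosts` would perform the redirect as the unprivileged user). The same ensure-one-entry logic as a Go sketch, pointed at a scratch file rather than the real /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-format file so that exactly one line
// maps hostname, mirroring the grep -v / echo / cp pipeline in the log.
// The file at path must already exist.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname (the grep -v step).
		if !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// /tmp/hosts is a stand-in; the real target in the log is /etc/hosts.
	if err := ensureHostsEntry("/tmp/hosts", "192.168.85.2",
		"control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
```
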
	I1210 07:32:05.942185    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:06.099167    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:06.121581    2240 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600 for IP: 192.168.85.2
	I1210 07:32:06.121613    2240 certs.go:195] generating shared ca certs ...
	I1210 07:32:06.121640    2240 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.121920    2240 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:32:06.122447    2240 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:32:06.122578    2240 certs.go:257] generating profile certs ...
	I1210 07:32:06.122578    2240 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key
	I1210 07:32:06.122578    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt with IP's: []
	I1210 07:32:06.321440    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt ...
	I1210 07:32:06.321440    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt: {Name:mk30a4977cc0d8ffd50678b3c23caa1e53531dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.322223    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key ...
	I1210 07:32:06.322223    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key: {Name:mke10982a653bbe15c8edebf2f43dc216f9268be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.323200    2240 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba
	I1210 07:32:06.323200    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:32:06.341062    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba ...
	I1210 07:32:06.341062    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba: {Name:mk0e9e825524eecc7aedfd18bb3bfe0b08c0466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342014    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba ...
	I1210 07:32:06.342014    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba: {Name:mk42b80e536f4c7e07cd83fa60afbb5af1e6e8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342947    2240 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt
	I1210 07:32:06.354920    2240 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key
	I1210 07:32:06.355812    2240 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key
	I1210 07:32:06.355812    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt with IP's: []
	I1210 07:32:06.438517    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt ...
	I1210 07:32:06.438517    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt: {Name:mk49d63357d91f886b5db1adca8a8959ac8a2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.439596    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key ...
	I1210 07:32:06.439596    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key: {Name:mkd00fe816a16ba7636ee1faff5584095510b505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
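	Note: the cert generation above follows minikube's usual layout: the shared minikubeCA and proxyClientCA pairs are reused from ~/.minikube, while three per-profile certs are minted fresh — a "minikube-user" client cert for kubectl, an apiserver serving cert, and an "aggregator" proxy-client cert. The apiserver cert's SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] covers the kubernetes Service ClusterIP (the first address of the 10.96.0.0/12 ServiceCIDR shown in the StartCluster config below), loopback, the legacy 10.0.0.1 service address, and the node IP. One way to confirm the SANs once the cert has been copied onto the node (path from the scp lines below):

	    # print the Subject Alternative Names baked into the apiserver cert
	    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'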
	I1210 07:32:06.454147    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:32:06.454968    2240 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:06.454968    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:32:06.455228    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:32:06.455417    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:32:06.455581    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:32:06.455768    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:32:06.456703    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:06.490234    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:06.516382    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:06.546895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:06.579157    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:32:06.611194    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:32:06.642582    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:06.673947    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:32:06.702762    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:32:06.734932    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:32:06.763895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:06.794884    2240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
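	Note: the scp batch above stages everything kubeadm and the kubelet will read: the CA pairs and profile certs land in /var/lib/minikube/certs, the three trust roots go to /usr/share/ca-certificates, and "scp memory" means the kubeconfig is rendered in memory and streamed straight to /var/lib/minikube/kubeconfig without a temp file on the Windows host. A quick sanity check of the staged layout, using the paths from the log:

	    # list the staged material at the destinations used above
	    sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig \
	      /usr/share/ca-certificates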
	I1210 07:32:06.824804    2240 ssh_runner.go:195] Run: openssl version
	I1210 07:32:06.839620    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.863187    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:32:06.881235    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.889982    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.896266    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.945361    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:06.965592    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:32:06.982615    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.000345    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:32:07.019650    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.028440    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.032681    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.080664    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.098781    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.119820    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.138968    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:07.157588    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.166110    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.169123    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.218939    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:32:07.238245    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
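	Note: the openssl/ln sequence above is a by-hand c_rehash: for each PEM under /usr/share/ca-certificates, minikube computes the OpenSSL subject hash and links /etc/ssl/certs/<hash>.0 to it, which is exactly how OpenSSL's CApath lookup resolves trust roots (b5213941 is the subject hash of minikubeCA.pem here, hence the b5213941.0 link). One link reproduced by hand from the logged commands:

	    # OpenSSL looks CAs up in /etc/ssl/certs by subject hash; .0 marks the first match
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"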
	I1210 07:32:07.255844    2240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:07.263714    2240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:32:07.263714    2240 kubeadm.go:401] StartCluster: {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:07.267520    2240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:32:07.300048    2240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:07.317060    2240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:32:07.333647    2240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:32:07.337744    2240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:32:07.353638    2240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:32:07.353638    2240 kubeadm.go:158] found existing configuration files:
	
	I1210 07:32:07.357869    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:32:07.371538    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:32:07.375620    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:32:07.392582    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:32:07.408459    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:32:07.412872    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:32:07.431340    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.446697    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:32:07.451332    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.472431    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:04.810034    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:04.838035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:04.888039    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.888039    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:04.892025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:04.955032    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.955032    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:04.959038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:04.995031    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.995031    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:04.999034    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:05.035036    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.035036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:05.040047    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:05.079034    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.079034    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:05.084038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:05.123032    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.123032    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:05.128035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:05.165033    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.165033    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:05.169028    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:05.205183    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.205183    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:05.205183    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:05.205183    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:05.248358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:05.248358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:05.349366    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:05.349366    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:05.349366    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:05.384377    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:05.384377    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:05.439383    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:05.439383    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
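	Note: the pid-1436 and pid-6044 lines interleaved here belong to other profiles starting in parallel (a v1.35.0-rc.1 cluster and the no-preload-099700 waiter, respectively). The 1436 wait loop repeats the same cycle every few seconds: pgrep for a kube-apiserver process, docker ps name filters for each control-plane container (all returning 0 containers), then a diagnostics pass over kubelet, dmesg, describe nodes, Docker, and container status before retrying. The describe-nodes step fails with "connection refused" simply because nothing is listening on localhost:8443 yet. The core of the probe, trimmed from the log:

	    # empty output here is what produces "0 containers" / "No container was found"
	    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
	    # refused until an apiserver is actually serving on :8443
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig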
	I1210 07:32:08.021198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:08.045549    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:08.076568    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.076568    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:08.082429    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:08.113514    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.113514    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:08.117280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:08.145243    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.145243    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:08.151846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:08.182475    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.182475    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:08.186570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:08.214500    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.214554    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:08.218698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:08.250229    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.250229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:08.254493    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:08.298394    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.298394    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:08.302457    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:08.331561    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.331561    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:08.331561    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:08.331561    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:08.368913    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:08.368913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:32:07.487983    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:32:07.492242    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
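	Note: the grep/rm passes above are minikube's stale-config cleanup: each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf is grepped for the expected https://control-plane.minikube.internal:8443 endpoint, and any file that does not contain it is removed so kubeadm can regenerate it. On this first start every grep exits 2 because the files are missing, so the rm -f calls are no-ops. The shape of one pass, condensed:

	    # remove a kubeconfig that does not point at the expected control-plane endpoint
	    grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	      || sudo rm -f /etc/kubernetes/admin.conf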
	I1210 07:32:07.510557    2240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:32:07.626646    2240 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:32:07.630270    2240 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:32:07.725615    2240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
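	Note: the Start line above is the actual bootstrap: kubeadm init runs inside the kicbase container with the versioned binaries directory prepended to PATH, and the long --ignore-preflight-errors list disables checks that cannot pass under the docker driver (port, swap, CPU/memory, SystemVerification, and the DirAvailable/FileAvailable checks), which is why only the three [WARNING ...] lines follow instead of a preflight failure. Trimmed to its essentials (flag list abridged from the full command above):

	    # kubeadm init as minikube invokes it inside the container
	    sudo env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem,SystemVerification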
	W1210 07:32:08.453343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:08.453378    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:08.453417    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:08.488219    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:08.488219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:08.533777    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:08.533777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.100898    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:11.123310    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:11.154369    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.154369    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:11.158211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:11.188349    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.188419    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:11.191999    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:11.218233    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.218263    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:11.222177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:11.248157    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.248157    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:11.252075    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:11.280934    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.280934    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:11.284871    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:11.316173    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.316225    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:11.320150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:11.350432    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.350494    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:11.354282    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:11.381767    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.381819    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:11.381819    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:11.381874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.447079    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:11.447079    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:11.485987    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:11.485987    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:11.568313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:11.568365    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:11.568408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:11.599474    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:11.599518    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:14.165429    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:14.189363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:14.220411    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.220478    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:14.223878    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:14.253748    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.253798    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:14.257409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:14.288235    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.288235    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:14.291689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:14.323349    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.323349    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:14.326680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:14.355227    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.355227    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:14.358704    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:14.389648    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.389648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:14.393032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:14.424212    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.424212    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:14.427425    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:14.457834    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.457834    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:14.457834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:14.457834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:14.486053    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:14.486053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:14.538138    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:14.538138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:14.601542    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:14.601542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:14.638885    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:14.638885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:14.724482    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
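
The block above is one iteration of minikube's wait loop: having found no apiserver process via pgrep, it probes Docker for each expected control-plane container by name filter (k8s_kube-apiserver, k8s_etcd, and so on), and when every probe comes back with zero containers it falls back to dumping kubelet, dmesg, describe-nodes, Docker, and container-status logs before retrying. A minimal Go sketch of that probe step, assuming only a local docker CLI; the helper names are illustrative, not minikube's internal logs.go API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Expected control-plane containers, matching the k8s_<name> filters in the log.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// containerIDs lists IDs of containers whose name matches k8s_<component>.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		found := false
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			found = true
			fmt.Printf("%q: %d containers: %v\n", c, len(ids), ids)
		}
		if !found {
			fmt.Println("no control-plane containers yet; gather kubelet/dmesg/docker logs and retry")
		}
	}
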
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:29.223517    2240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:32:29.224269    2240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:32:29.224467    2240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:32:29.229027    2240 out.go:252]   - Generating certificates and keys ...
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:32:29.229660    2240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:32:29.229827    2240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:32:29.230468    2240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.230658    2240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:32:29.230768    2240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:32:29.230900    2240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:32:29.231503    2240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:32:29.231582    2240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:32:29.231582    2240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:32:29.234181    2240 out.go:252]   - Booting up control plane ...
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:32:29.234702    2240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:32:29.234874    2240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002366911s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.235267696s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.434241439s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.5023353s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:32:29.236992    2240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:32:29.237590    2240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:32:29.237590    2240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:32:29.237590    2240 kubeadm.go:319] [bootstrap-token] Using token: a4ld74.20ve6i3rm5ksexxo
	I1210 07:32:29.239648    2240 out.go:252]   - Configuring RBAC rules ...
	I1210 07:32:29.239648    2240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:32:29.240674    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:32:29.240944    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:32:29.241383    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:32:29.241649    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:32:29.241668    2240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:32:29.241668    2240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:32:29.242197    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:32:29.242850    2240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:32:29.242850    2240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:32:29.243436    2240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--control-plane 
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:32:29.244018    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.244018    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
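
The --discovery-token-ca-cert-hash in the join commands above is what lets a joining node pin the cluster CA: per kubeadm's documentation it is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate. A short Go sketch that recomputes it, assuming the certificateDir shown earlier in this log (/var/lib/minikube/certs; a stock kubeadm cluster would use /etc/kubernetes/pki/ca.crt):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Cluster CA certificate; path matches the certificateDir in this log.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in CA file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The hash covers the DER-encoded Subject Public Key Info of the CA cert.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
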
	I1210 07:32:29.244018    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:29.246745    2240 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1210 07:32:29.266121    2240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 07:32:29.270492    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1210 07:32:29.280075    2240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1210 07:32:29.280075    2240 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1210 07:32:29.314572    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
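
The three steps above are how a custom CNI manifest reaches the cluster: a stat existence check (whose non-zero exit simply means "not uploaded yet"), an scp of testdata\kube-flannel.yaml to /var/tmp/minikube/cni.yaml, and a kubectl apply against the in-node kubeconfig. A rough Go equivalent for a kic (Docker-container) node, where docker cp stands in for minikube's SSH-based scp; it assumes /var/tmp/minikube already exists on the node and that the container name and paths match this log:

	package main

	import (
		"log"
		"os/exec"
	)

	const (
		node     = "custom-flannel-648600" // with the kic driver, the node is a Docker container
		manifest = "testdata/kube-flannel.yaml"
		remote   = "/var/tmp/minikube/cni.yaml"
		kubectl  = "/var/lib/minikube/binaries/v1.34.3/kubectl"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s %v: %v\n%s", name, args, err, out)
		}
		return err
	}

	func main() {
		// Existence check, mirroring ssh_runner's stat; failure means "upload it".
		if err := run("docker", "exec", node, "stat", "-c", "%s %y", remote); err != nil {
			if err := run("docker", "cp", manifest, node+":"+remote); err != nil {
				log.Fatal(err)
			}
		}
		// Apply with the kubeconfig that lives inside the node.
		if err := run("docker", "exec", node, "sudo", kubectl, "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", remote); err != nil {
			log.Fatal(err)
		}
	}
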
	I1210 07:32:29.754597    2240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-648600 minikube.k8s.io/updated_at=2025_12_10T07_32_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=custom-flannel-648600 minikube.k8s.io/primary=true
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.770603    2240 ops.go:34] apiserver oom_adj: -16
	I1210 07:32:29.895974    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.395328    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.896828    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.396414    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.896200    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:32.396778    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:32.894984    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.397040    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.895777    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:34.084987    2240 kubeadm.go:1114] duration metric: took 4.3302518s to wait for elevateKubeSystemPrivileges
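
The half-second cadence of `kubectl get sa default` above is a readiness gate: after creating the minikube-rbac clusterrolebinding for kube-system:default, minikube retries until the service-account controller has actually created that account (the log reports the wait as elevateKubeSystemPrivileges, 4.33s here). A hedged sketch of the same poll, run from wherever the kubeconfig can reach the cluster; the timeout is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until the service-account
	// controller has created it, mirroring the ~500ms retry cadence in the log.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.3/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println(err)
	}
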
	I1210 07:32:34.085013    2240 kubeadm.go:403] duration metric: took 26.8208803s to StartCluster
	I1210 07:32:34.085095    2240 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.085299    2240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:32:34.087295    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.088397    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:32:34.088397    2240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:32:34.088932    2240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:34.089115    2240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-648600"
	I1210 07:32:34.089272    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.089454    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:32:34.091048    2240 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:34.099313    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.100384    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.101389    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.165121    2240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-648600"
	I1210 07:32:34.165121    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.166107    2240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:32:34.174109    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.177116    2240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:34.177116    2240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:32:34.181109    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.228110    2240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.228110    2240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:32:34.231111    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.232110    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.295102    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.361698    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:32:34.577307    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.743911    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.748484    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:35.145540    2240 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
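
For reference, the sed pipeline above splices two things into the CoreDNS Corefile: a log directive ahead of errors, and a hosts block ahead of the forward stanza so that host.minikube.internal resolves to the host gateway (192.168.65.254 in this run). The inserted hosts fragment, unescaped:

	hosts {
	   192.168.65.254 host.minikube.internal
	   fallthrough
	}
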
	I1210 07:32:35.149854    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:35.210514    2240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:35.684992    2240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-648600" context rescaled to 1 replicas
	I1210 07:32:35.860846    2240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1123448s)
	I1210 07:32:35.863841    2240 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 07:32:35.869842    2240 addons.go:530] duration metric: took 1.7814171s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1210 07:32:37.217134    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:32:39.747934    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:42.215582    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
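The cycle that ends above is minikube's log collector at work: it probes for each expected control-plane container by name, finds none, and falls back to host-level logs (kubelet, dmesg, describe nodes, Docker, container status). A minimal sketch of that probe loop, assuming a direct os/exec call to the Docker CLI rather than minikube's ssh_runner (the component list is simply the one visible in the log above; this is an illustration, not minikube's actual logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainers mirrors the pattern in the log: for each expected
// component, list matching container IDs with
// `docker ps -a --filter=name=k8s_<name> --format={{.ID}}` and report
// when nothing matches.
func probeContainers(components []string) {
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

func main() {
	probeContainers([]string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"})
}

When every probe returns zero containers, as here, the collector can only gather journalctl/dmesg output and attempt `kubectl describe nodes`, which is why those fallbacks repeat in each cycle below.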
	W1210 07:32:44.217341    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:45.217929    2240 node_ready.go:49] node "custom-flannel-648600" is "Ready"
	I1210 07:32:45.217929    2240 node_ready.go:38] duration metric: took 10.0071872s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:45.217929    2240 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:45.221913    2240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.241224    2240 api_server.go:72] duration metric: took 11.1520714s to wait for apiserver process to appear ...
	I1210 07:32:45.241248    2240 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:45.241297    2240 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58199/healthz ...
	I1210 07:32:45.255531    2240 api_server.go:279] https://127.0.0.1:58199/healthz returned 200:
	ok
	I1210 07:32:45.259632    2240 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:45.259696    2240 api_server.go:131] duration metric: took 18.4479ms to wait for apiserver health ...
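The healthz lines above are the apiserver readiness gate: poll the forwarded /healthz endpoint until it answers 200 "ok". A rough, self-contained equivalent of that check, assuming the port from the log and skipping TLS verification the way a local probe against the apiserver's self-issued certificate typically must (a hypothetical helper, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes. InsecureSkipVerify is used because the
// apiserver's certificate is not issued for 127.0.0.1 by a system CA.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://127.0.0.1:58199/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}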
	I1210 07:32:45.259716    2240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:45.268791    2240 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:45.268849    2240 system_pods.go:61] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.268849    2240 system_pods.go:61] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.268894    2240 system_pods.go:74] duration metric: took 9.14ms to wait for pod list to return data ...
	I1210 07:32:45.268935    2240 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:45.273316    2240 default_sa.go:45] found service account: "default"
	I1210 07:32:45.273353    2240 default_sa.go:55] duration metric: took 4.4181ms for default service account to be created ...
	I1210 07:32:45.273353    2240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:45.280767    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.280945    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.280945    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.281064    2240 retry.go:31] will retry after 250.377545ms: missing components: kube-dns
	I1210 07:32:45.539061    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.539616    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.539616    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.539718    2240 retry.go:31] will retry after 289.337772ms: missing components: kube-dns
	I1210 07:32:45.840329    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.840329    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.840329    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.840528    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.840528    2240 retry.go:31] will retry after 309.196772ms: missing components: kube-dns
	I1210 07:32:46.157293    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.157293    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.157293    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.157293    2240 retry.go:31] will retry after 407.04525ms: missing components: kube-dns
	I1210 07:32:46.592154    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.592265    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.592265    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.592318    2240 retry.go:31] will retry after 495.94184ms: missing components: kube-dns
	I1210 07:32:47.094557    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.094557    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.094557    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.095074    2240 retry.go:31] will retry after 778.892273ms: missing components: kube-dns
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:47.881744    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.881744    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.881744    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.882297    2240 retry.go:31] will retry after 913.098856ms: missing components: kube-dns
	I1210 07:32:48.802046    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:48.802046    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:48.802046    2240 system_pods.go:126] duration metric: took 3.5286376s to wait for k8s-apps to be running ...
	I1210 07:32:48.802046    2240 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:48.807470    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:48.825598    2240 system_svc.go:56] duration metric: took 23.5517ms WaitForService to wait for kubelet
	I1210 07:32:48.825598    2240 kubeadm.go:587] duration metric: took 14.7364354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:48.825689    2240 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:48.831503    2240 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:32:48.831503    2240 node_conditions.go:123] node cpu capacity is 16
	I1210 07:32:48.831503    2240 node_conditions.go:105] duration metric: took 5.8138ms to run NodePressure ...
	I1210 07:32:48.831503    2240 start.go:242] waiting for startup goroutines ...
	I1210 07:32:48.831503    2240 start.go:247] waiting for cluster config update ...
	I1210 07:32:48.831503    2240 start.go:256] writing updated cluster config ...
	I1210 07:32:48.837195    2240 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:48.844148    2240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:48.853005    2240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.864384    2240 pod_ready.go:94] pod "coredns-66bc5c9577-dhgpj" is "Ready"
	I1210 07:32:48.864472    2240 pod_ready.go:86] duration metric: took 11.4282ms for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.867887    2240 pod_ready.go:83] waiting for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.876367    2240 pod_ready.go:94] pod "etcd-custom-flannel-648600" is "Ready"
	I1210 07:32:48.876367    2240 pod_ready.go:86] duration metric: took 8.4794ms for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.880884    2240 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.888453    2240 pod_ready.go:94] pod "kube-apiserver-custom-flannel-648600" is "Ready"
	I1210 07:32:48.888453    2240 pod_ready.go:86] duration metric: took 7.5694ms for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.891939    2240 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.254863    2240 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-648600" is "Ready"
	I1210 07:32:49.255015    2240 pod_ready.go:86] duration metric: took 363.0699ms for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.454047    2240 pod_ready.go:83] waiting for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.854254    2240 pod_ready.go:94] pod "kube-proxy-vrrgr" is "Ready"
	I1210 07:32:49.854329    2240 pod_ready.go:86] duration metric: took 400.2758ms for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.054101    2240 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:94] pod "kube-scheduler-custom-flannel-648600" is "Ready"
	I1210 07:32:50.453713    2240 pod_ready.go:86] duration metric: took 399.6056ms for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:40] duration metric: took 1.6095401s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:50.552047    2240 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:32:50.555856    2240 out.go:179] * Done! kubectl is now configured to use "custom-flannel-648600" cluster and "default" namespace by default
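That completes the successful custom-flannel-648600 start. Note the retry cadence in the wait above (250ms, 289ms, 309ms, 407ms, 495ms, 778ms, 913ms): each round re-lists the kube-system pods and retries after a jittered, growing delay until kube-dns reports Running, then the per-pod "Ready" checks run. A minimal sketch of that retry pattern, assuming a caller-supplied check function rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs check until it succeeds, sleeping a
// jittered, growing interval between attempts, like the
// "will retry after ..." lines in the log.
func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		// grow the delay each round and add random jitter
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v\n", d)
		time.Sleep(d)
	}
	return errors.New("missing components: kube-dns")
}

func main() {
	ready := false
	_ = retryWithBackoff(func() error {
		if !ready { // stand-in for "is coredns Running yet?"
			ready = true
			return errors.New("not ready")
		}
		return nil
	}, 5, 250*time.Millisecond)
}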
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
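Every `kubectl describe nodes` attempt in these cycles fails the same way: `connect: connection refused` on localhost:8443, consistent with the container probes finding no kube-apiserver at all, so the collector records the stderr and moves on to the next round. A short sketch of running that command and classifying this failure mode, assuming a plain os/exec invocation with the binary and kubeconfig paths taken from the log (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNodes runs kubectl the way the log shows and reports whether
// a failure is the "apiserver not up" case (connection refused).
func describeNodes(kubectl, kubeconfig string) error {
	out, err := exec.Command(kubectl, "describe", "nodes",
		"--kubeconfig", kubeconfig).CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "connection refused") {
			return fmt.Errorf("apiserver unreachable: %s",
				strings.TrimSpace(string(out)))
		}
		return err
	}
	fmt.Print(string(out))
	return nil
}

func main() {
	if err := describeNodes("/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
		"/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Println(err)
	}
}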
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
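
Every "describe nodes" attempt in this section fails the same way: nothing is listening on localhost:8443 inside the node, so kubectl's API discovery request is refused before it reaches an apiserver. A hedged way to confirm that symptom by hand (the /livez path is the standard apiserver health endpoint; reaching the node through minikube ssh is an assumption about your setup):

    # Probe the apiserver port named in the errors above; -k skips TLS verification.
    # Add -p <profile> if the cluster is not the default minikube profile.
    minikube ssh "curl -sk --max-time 5 https://localhost:8443/livez" \
      || echo "apiserver not listening on 8443"
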
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
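
Each pass opens with the same process check before falling back to the docker ps filters. The pgrep invocation is verbatim from the log; the annotation and echo wrapper are a sketch:

    # -f: match against the full command line; with -f, -x requires the whole
    # command line to match the pattern; -n: newest matching process only.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "kube-apiserver process found" \
      || echo "no kube-apiserver process"
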
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
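The dmesg flags are all about clean capture: -H gives human-readable timestamps, -P disables the pager, -L=never turns color off, and --level restricts output to warnings and worse before tail trims it to the last 400 lines:

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400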
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
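The "container status" step is deliberately runtime-agnostic: if `which crictl` finds nothing, it echoes the literal word crictl, running that then fails, and the || falls through to the plain docker CLI, which is what actually answers on this docker-runtime node:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a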
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
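The W-lines from PID 6044 interleaved here belong to a different, concurrently running test polling node readiness for the no-preload-099700 profile through its forwarded port. Unlike the connection-refused errors above, EOF means the TCP connection opened but was closed before any HTTP response, which fits an apiserver still coming up (or a forwarder with no live backend). A raw probe of the same forwarded port, with host and port taken from the log line, makes that distinction visible:

	curl -vk --max-time 5 https://127.0.0.1:57440/version || true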
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
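Note that the "describe nodes" step never uses the host's kubectl: it shells into the node and runs the version-pinned binary minikube installed there, pointed at the node-local kubeconfig. The exact command from the log can be replayed on the node verbatim:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig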
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
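Each iteration above is minikube's apiserver health-check loop: roughly every three seconds (07:33:59, 07:34:02, 07:34:05, ...) it re-runs pgrep for the apiserver process, probes docker for each expected control-plane container, and re-gathers the kubelet, dmesg, describe-nodes, Docker, and container-status logs. The name=k8s_... filters rely on the legacy dockershim/cri-dockerd convention of naming containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt> (a convention detail, not something this log states), so "0 containers" means the kubelet never even created the static-pod container. A minimal manual sweep along the same lines, assuming docker is reachable inside the node, might be:

	sudo docker ps -a --filter name=k8s_ --format '{{.ID}}\t{{.Names}}\t{{.Status}}'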
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 
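Process 6044 is the no-preload-099700 start that gives up here: after the full 6m0s wait the node never reported Ready because no kube-apiserver container was ever created, every status probe against https://127.0.0.1:57440 ended in EOF, and the run exits with GUEST_START. A quick post-mortem from the host, assuming the no-preload-099700 profile still exists, could mirror what the collector above runs:

	minikube -p no-preload-099700 ssh -- sudo docker ps -a --filter name=k8s_kube-apiserver
	minikube -p no-preload-099700 ssh -- sudo journalctl -u kubelet -n 100 --no-pager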
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:15.052930    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:15.080623    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:15.117403    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.117403    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:15.120370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:15.147363    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.148371    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:15.151363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:15.180365    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.180365    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:15.183366    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:15.215366    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.215366    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:15.218364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:15.247369    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.247369    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:15.251365    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:15.283373    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.283373    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:15.286369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:15.314370    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.314370    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:15.317368    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:15.347380    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.347380    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:15.347380    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:15.347380    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:15.421369    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:15.421369    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:15.458368    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:15.458368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:15.566221    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:15.566279    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:15.566338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:15.605803    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:15.605803    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:18.163754    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:18.197669    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:18.254543    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.254543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:18.260541    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:18.293062    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.293062    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:18.296833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:18.327885    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.327968    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:18.331280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:18.368942    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.368942    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:18.372299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:18.400463    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.400463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:18.405006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:18.446334    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.446379    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:18.449958    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:18.478295    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.478381    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:18.482123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:18.510432    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.510506    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:18.510548    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:18.510548    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:18.572862    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:18.572862    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:18.614127    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:18.614127    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:18.702730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:18.702730    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:18.702730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:18.729639    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:18.729639    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.289931    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:21.315099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:21.349129    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.349129    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:21.352917    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:21.385897    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.386013    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:21.389207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:21.439847    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.439847    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:21.444868    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:21.473011    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.473011    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:21.476938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:21.503941    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.503983    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:21.507954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:21.536377    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.536377    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:21.540123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:21.571714    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.571714    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:21.575681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:21.605581    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.605581    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:21.605581    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:21.605581    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:21.633565    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:21.633565    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.687271    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:21.687271    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:21.750102    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:21.750102    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:21.792165    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:21.792165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:21.885403    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
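The kubeconfig used above (/var/lib/minikube/kubeconfig) points kubectl at https://localhost:8443 on the node itself, and "connection refused" on [::1]:8443 means nothing is listening on that port at all, as opposed to a TLS or authorization failure. A one-line listener check from inside the node, assuming ss or netstat ships in the node image, might be:

	sudo ss -ltn | grep 8443 || sudo netstat -ltn | grep 8443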
	I1210 07:34:24.393597    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:24.420363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:24.450891    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.450891    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:24.454037    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:24.483407    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.483407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:24.489862    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:24.517830    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.517830    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:24.521711    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:24.549403    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.549403    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:24.553551    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:24.580367    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.580367    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:24.584748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:24.612646    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.612646    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:24.616710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:24.647684    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.647753    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:24.651184    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:24.679053    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.679053    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:24.679053    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:24.679053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:24.768115    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.768115    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:24.768115    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:24.795167    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:24.795201    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:24.844459    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:24.844459    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:24.907171    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:24.907171    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.453205    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:27.478026    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:27.513249    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.513249    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:27.517125    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:27.547733    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.547733    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:27.551680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:27.577736    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.577736    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:27.581469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:27.612483    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.612483    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:27.616434    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:27.644895    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.644895    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:27.650606    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:27.678273    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.678273    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:27.681744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:27.708604    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.708604    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:27.712244    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:27.742726    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.742726    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:27.742726    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:27.742726    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:27.807570    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:27.807570    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.846722    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:27.846722    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:27.929641    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:27.929641    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:27.929641    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:27.956087    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:27.956087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
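Each polling cycle above has the same shape: minikube first checks for a running apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), then scans for each control-plane container by its k8s_ name prefix, and only after that gathers kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of the same scan, re-runnable by hand inside the node (the loop is an illustrative rewrite of the per-component filters above, not minikube's own code):

    # illustrative re-run of the per-cycle scan seen in this log
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # empty output means the component container was never created
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done
    sudo journalctl -u kubelet -n 400                # kubelet logs
    sudo journalctl -u docker -u cri-docker -n 400   # Docker / cri-docker logs

Every filter in this run returns "0 containers: []", which is why each cycle below ends the same way.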
	I1210 07:34:30.506646    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:30.530148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:30.563444    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.563444    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:30.567219    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:30.596843    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.596843    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:30.600803    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:30.628947    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.628947    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:30.632665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:30.663325    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.663369    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:30.667341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:30.695640    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.695640    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:30.699545    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:30.728310    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.728310    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:30.731899    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:30.758598    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.758598    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:30.763285    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:30.792051    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.792051    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:30.792051    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:30.792051    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:30.830219    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:30.830219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:30.919635    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:30.919635    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:30.919635    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:30.949360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:30.949360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.997435    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:30.997435    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.565782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:33.590543    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:33.623936    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.623936    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:33.629607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:33.664589    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.664673    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:33.668215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:33.698892    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.698892    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:33.702344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:33.733428    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.733428    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:33.737226    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:33.764873    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.764873    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:33.768422    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:33.800350    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.800350    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:33.804811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:33.836711    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.836711    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:33.840164    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:33.869248    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.869333    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:33.869333    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:33.869333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.932626    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:33.933627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:33.974227    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:33.974227    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:34.066031    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:34.066031    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:34.066031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:34.092765    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:34.092765    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:36.652871    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:36.677531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:36.712608    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.712608    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:36.718832    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:36.748298    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.748298    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:36.751762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:36.783390    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.783403    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:36.787051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:36.815730    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.815766    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:36.819100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:36.848875    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.848875    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:36.852925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:36.886657    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.886657    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:36.890808    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:36.920858    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.920858    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:36.924583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:36.955882    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.955960    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:36.956001    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:36.956001    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:37.021848    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:37.021848    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:37.060744    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:37.060744    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:37.154895    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:37.154895    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:37.154895    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:37.182385    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:37.182385    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:39.737032    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:39.762115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:39.792900    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.792900    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:39.797014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:39.825423    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.825455    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:39.829352    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:39.856679    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.856679    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:39.860615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:39.891351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.891351    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:39.895346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:39.924351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.924351    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:39.928531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:39.956447    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.956447    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:39.961810    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:39.987792    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.987792    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:39.991127    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:40.018614    1436 logs.go:282] 0 containers: []
	W1210 07:34:40.018614    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:40.018614    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:40.018614    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:40.082378    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:40.082378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:40.123506    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:40.123506    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:40.208266    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:40.209272    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:40.209272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:40.239017    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:40.239017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:42.793527    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:42.818084    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:42.852095    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.852095    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:42.855685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:42.883269    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.883269    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:42.887287    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:42.918719    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.918800    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:42.923828    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:42.950663    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.950663    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:42.956319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:42.985991    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.985991    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:42.989729    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:43.017767    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.017824    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:43.021689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:43.048180    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.048180    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:43.052257    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:43.081092    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.081160    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:43.081183    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:43.081217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:43.174944    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:43.174992    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:43.174992    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:43.202288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:43.202807    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:43.249217    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:43.249217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:43.311267    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:43.311267    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
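All of the describe-nodes failures above are one symptom: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, but with no kube-apiserver container running nothing is listening on that port, so API discovery fails immediately with "connection refused". An illustrative way to confirm this from inside the node (these checks are diagnostic assumptions, not part of the test run):

    # if the apiserver were up this would return an HTTP status; here the connect itself fails
    curl -k --max-time 5 https://localhost:8443/healthz
    # assumed cross-check: confirm nothing is bound to 8443
    sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'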
	I1210 07:34:45.857003    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:45.881743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:45.911856    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.911856    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:45.915335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:45.945613    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.945613    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:45.949134    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:45.977768    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.977768    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:45.982182    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:46.010859    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.010859    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:46.014603    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:46.043489    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.043531    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:46.047198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:46.080651    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.080685    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:46.084319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:46.116705    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.116780    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:46.121508    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:46.154299    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.154299    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:46.154299    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:46.154299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:46.222546    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:46.222546    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:46.262468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:46.262468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:46.349894    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:46.349894    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:46.349894    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:46.376804    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:46.376804    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:48.931982    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:48.957769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:48.990182    1436 logs.go:282] 0 containers: []
	W1210 07:34:48.990182    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:48.994255    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:49.021913    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.021913    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:49.026344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:49.054704    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.054704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:49.058471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:49.089507    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.089559    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:49.093804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:49.121462    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.121462    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:49.125755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:49.156174    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.156174    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:49.160707    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:49.190933    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.190933    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:49.194771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:49.220610    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.220610    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:49.220610    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:49.220610    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:49.283897    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:49.283897    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:49.324154    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:49.324154    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:49.412165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:49.412165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:49.413146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:49.440045    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:49.440045    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.013495    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:52.044149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:52.080205    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.080205    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:52.084762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:52.115105    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.115105    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:52.119720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:52.149672    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.149672    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:52.153985    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:52.186711    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.186711    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:52.192181    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:52.217751    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.217751    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:52.221590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:52.250827    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.250876    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:52.254668    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:52.284643    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.284643    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:52.288811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:52.316628    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.316707    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:52.316707    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:52.316707    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:52.348325    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.348325    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.408110    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.408110    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:52.471268    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:52.471268    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:52.511512    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:52.511512    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.594976    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:55.100294    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:55.126530    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:55.160945    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.160945    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:55.164755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:55.196407    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.196407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:55.199994    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:55.229174    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.229174    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:55.232898    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:55.265856    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.265856    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:55.268892    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:55.302098    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.302121    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:55.305590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:55.335754    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.335754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:55.339583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:55.368170    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.368251    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:55.372008    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:55.397576    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.397576    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:55.397576    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:55.397576    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:55.434345    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:55.434345    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:55.528958    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:55.528958    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:55.528958    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:55.555805    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:55.555805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:55.602232    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:55.602232    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
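The timestamps show the wait loop retrying on a roughly three-second cadence (07:34:27, :30, :33, :36, ..., :58) without any control-plane container ever appearing. In shell terms the retry behaves like this sketch (the 3-second interval is read off the log timestamps; the real interval lives in minikube's Go source, not here):

    # approximate retry loop matching the cadence in this log (sketch, not minikube code)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # ~3s between attempts per the timestamps above
    done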
	I1210 07:34:58.169858    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:58.195497    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:58.226557    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.226588    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:58.229677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:58.260817    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.260817    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:58.265378    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:58.293848    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.293920    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:58.297406    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:58.326737    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.326737    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:58.330307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:58.357319    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.357407    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:58.360727    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:58.392361    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.392405    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:58.395697    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:58.425728    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.425807    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:58.429369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:58.457816    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.457866    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:58.457866    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:58.457866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:58.495777    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:58.495777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:58.585489    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:58.585489    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:58.585489    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:58.613007    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:58.613007    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:58.661382    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:58.661382    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
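	The probe sequence at the top of each pass can be reproduced the same way. A sketch of the equivalent loop, where the k8s_<component> container names are assumptions carried over from the --filter values in the log above:
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      # mirror minikube's per-component container probe
	      ids=$(minikube ssh -- docker ps -a --filter=name=k8s_${c} --format '{{.ID}}')
	      echo "k8s_${c}: ${ids:-<none>}"
	    done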
	[The same cycle repeats every ~3 seconds from 07:35:01 through 07:35:26 (kubectl PIDs 17245, 17422, 17569, 17724, 17925, 18064, 18233, 18398, 18575): each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers, gathers the same kubelet, dmesg, Docker, and container-status logs in varying order, and "kubectl describe nodes" fails with the identical connection-refused errors shown above.]
	I1210 07:35:29.024398    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:29.049372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:29.084989    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.085019    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:29.089078    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:29.116420    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.116420    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:29.120531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:29.149880    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.149880    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:29.153505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:29.181726    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.181790    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:29.185295    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:29.216713    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.216713    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:29.222568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:29.249487    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.249487    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:29.253512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:29.283473    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.283497    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:29.287061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:29.313225    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.313225    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:29.313225    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:29.313225    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:29.399665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:29.399665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:29.399665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:29.428593    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:29.428593    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:29.477815    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:29.477877    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:29.541874    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:29.541874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:32.087876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:32.113456    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:32.145773    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.145805    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:32.149787    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:32.178912    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.178987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:32.182700    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:32.213301    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.213301    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:32.217129    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:32.246756    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.246824    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:32.250299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:32.278791    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.278835    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:32.282397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:32.316208    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.316278    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:32.320233    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:32.349155    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.349155    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:32.352807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:32.386875    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.386875    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:32.386944    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:32.386944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:32.479781    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:32.479781    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:32.479781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:32.506994    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:32.506994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:32.561757    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:32.561757    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:32.624545    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:32.624545    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.176040    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:35.201056    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:35.235735    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.235735    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:35.239655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:35.267349    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.267416    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:35.270515    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:35.303264    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.303264    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:35.306371    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:35.339037    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.339263    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:35.343297    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:35.375639    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.375639    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:35.379647    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:35.407670    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.407670    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:35.411506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:35.446240    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.446240    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:35.450265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:35.477814    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.477814    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:35.477814    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:35.477814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:35.541174    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:35.541174    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.581633    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:35.581633    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:35.673254    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:35.673254    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:35.673254    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:35.701200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:35.701200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:38.255869    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:38.281759    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:38.316123    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.316123    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:38.319358    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:38.348903    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.348943    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:38.352900    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:38.381759    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.381795    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:38.385361    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:38.414524    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.414586    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:38.417710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:38.447131    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.447205    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:38.451100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:38.479508    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.479543    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:38.483003    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:38.512848    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.512848    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:38.516967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:38.547680    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.547680    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:38.547680    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:38.547680    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:38.614038    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:38.614038    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:38.658448    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:38.658448    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:38.743054    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:38.743054    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:38.743054    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:38.775152    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:38.775214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:41.333835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:41.358081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:41.393471    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.393471    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:41.396774    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:41.425173    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.425224    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:41.428523    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:41.456663    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.456663    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:41.459654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:41.490212    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.490212    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:41.493250    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:41.523505    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.523505    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:41.527006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:41.555529    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.555529    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:41.559605    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:41.590913    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.591011    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:41.596392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:41.627361    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.627421    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:41.627441    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:41.627538    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:41.692948    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:41.692948    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:41.731909    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:41.731909    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:41.816121    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:41.816121    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:41.816121    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:41.844622    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:41.844622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:44.401865    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:44.426294    1436 out.go:203] 
	W1210 07:35:44.428631    1436 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:35:44.428631    1436 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:35:44.428631    1436 out.go:285] * Related issues:
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:35:44.430629    1436 out.go:203] 
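Note: minikube aborts here because the apiserver probe repeated throughout the log above never finds a process. A minimal by-hand re-run of that same probe, assuming the profile name no-preload-099700 taken from this log, would be:

    # Same probe minikube runs inside the node container; no output = no apiserver process.
    minikube ssh -p no-preload-099700 -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    # Docker-side cross-check, mirroring the log gatherer's container query above.
    minikube ssh -p no-preload-099700 -- "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"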
	
	
	==> Docker <==
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794207271Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794291179Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794301480Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794308081Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794314981Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794339784Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794382688Z" level=info msg="Initializing buildkit"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.916550520Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923562810Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923807334Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923950448Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923820636Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:28:08 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:28:09 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:28:09 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
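Note: the dockerd startup above logs "Support for cgroup v1 is deprecated", which foreshadows the kubelet failure later in this dump. To pull the same journal slice by hand (a sketch, same profile assumed):

    minikube ssh -p no-preload-099700 -- "sudo journalctl -u docker -u cri-docker -n 400 --no-pager"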
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:43:18.946130   17283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:43:18.947352   17283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:43:18.948941   17283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:43:18.950585   17283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:43:18.952162   17283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
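Note: this kubectl runs inside the node and dials localhost:8443, which is refused because no apiserver is listening. From the Windows host, the equivalent check goes through the profile's kubeconfig and forwarded port instead (a sketch; it still fails here, but rules out an in-node port mapping issue):

    minikube -p no-preload-099700 kubectl -- get nodes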
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347496] CPU: 6 PID: 490841 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe73ddc4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe73ddc4af6.
	[  +0.000000] RSP: 002b:00007ffc57a05a90 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.867258] CPU: 5 PID: 491006 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1a7acb4b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1a7acb4af6.
	[  +0.000001] RSP: 002b:00007ffe19029200 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:32] tmpfs: Unknown parameter 'noswap'
	[ +15.541609] tmpfs: Unknown parameter 'noswap'
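Note: the repeated "tmpfs: Unknown parameter 'noswap'" entries mean something requested the tmpfs noswap mount option, which this 5.15 WSL2 kernel predates (the option only appeared in newer kernels). The kernel version is visible in the kernel section below and via:

    minikube ssh -p no-preload-099700 -- "uname -r"   # 5.15.153.1-microsoft-standard-WSL2 in this run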
	
	
	==> kernel <==
	 07:43:19 up  3:11,  0 user,  load average: 0.54, 1.12, 2.82
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:43:16 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:16 no-preload-099700 kubelet[17094]: E1210 07:43:16.111582   17094 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:43:16 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:43:16 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:43:16 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 10 07:43:16 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:16 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:16 no-preload-099700 kubelet[17112]: E1210 07:43:16.863999   17112 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:43:16 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:43:16 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:43:17 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 10 07:43:17 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:17 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:17 no-preload-099700 kubelet[17145]: E1210 07:43:17.609217   17145 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:43:17 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:43:17 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:43:18 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1206.
	Dec 10 07:43:18 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:18 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:18 no-preload-099700 kubelet[17160]: E1210 07:43:18.380610   17160 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:43:18 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:43:18 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:43:19 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1207.
	Dec 10 07:43:19 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:43:19 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
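Note: the kubelet excerpt at the end of the dump above is the actual root cause of this failure: kubelet exits during config validation because the host runs cgroup v1 while the kubelet is configured to refuse cgroup v1 hosts (on Kubernetes 1.31+ this behavior maps to the failCgroupV1 kubelet-config field; treat that field name as an assumption about this build). With kubelet crash-looping (restart counter 1207 and climbing), the apiserver never starts, and every probe earlier in the log comes back empty. The standard cgroup-mode check from the Kubernetes docs, using this run's profile name:

    # cgroup2fs => cgroup v2; tmpfs => cgroup v1 (the failing case here)
    minikube ssh -p no-preload-099700 -- "stat -fc %T /sys/fs/cgroup/"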
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 2 (599.5316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-525200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (583.7141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-525200 -n newest-cni-525200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (610.9545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-525200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (577.4224ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-525200 -n newest-cni-525200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (589.1374ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
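Note: the test drives pause -> status -> unpause -> status and expects "Paused" then "Running", but reads "Stopped" each time because neither the apiserver nor the kubelet is actually running in this profile. A by-hand reproduction of the sequence, assuming this run's profile name:

    minikube pause -p newest-cni-525200
    minikube status -p newest-cni-525200 --format={{.APIServer}}   # test wants "Paused"
    minikube unpause -p newest-cni-525200
    minikube status -p newest-cni-525200 --format={{.APIServer}}   # test wants "Running"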
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-525200
helpers_test.go:244: (dbg) docker inspect newest-cni-525200:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188",
	        "Created": "2025-12-10T07:18:58.277037255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:29:29.73179662Z",
	            "FinishedAt": "2025-12-10T07:29:26.920141661Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hostname",
	        "HostsPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hosts",
	        "LogPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188-json.log",
	        "Name": "/newest-cni-525200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-525200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-525200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-525200",
	                "Source": "/var/lib/docker/volumes/newest-cni-525200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-525200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-525200",
	                "name.minikube.sigs.k8s.io": "newest-cni-525200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6405ded628bc55282f5002d4bd683ef72ad68a142c14324a7fe852f16eb1d8f",
	            "SandboxKey": "/var/run/docker/netns/c6405ded628b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57762"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57764"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-525200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e73cdc5fd1be9396722947f498060ee7b5757251a78043b99e30abfea0ec658b",
	                    "EndpointID": "bf76bc1596f8833f7b9c83f8bb2261128b3871775b4118fe4c99fcdac5e453d3",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-525200",
	                        "6b7f9063cbda"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
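
The Ports map in the inspect output above is where the harness reads the host-side endpoints for the container (22, 2376, 32443, 5000, 8443). A minimal Go sketch of pulling one mapping out of that JSON, assuming the inspect output was saved to inspect.json (file name and program are illustrative, not minikube code):

// portfor.go: print the host address bound to the container's 8443/tcp,
// given `docker inspect newest-cni-525200 > inspect.json`. Sketch only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		panic(err)
	}
	var out []inspect // docker inspect always prints a JSON array
	if err := json.Unmarshal(data, &out); err != nil {
		panic(err)
	}
	for _, b := range out[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:57764 above
	}
}
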
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (583.9057ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
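
minikube status signals component state through its exit code, which is why the harness flags exit status 2 as possibly benign here: the host container of the just-paused cluster still reports Running. A sketch of capturing both the output and the exit code without treating non-zero as fatal (illustrative only, not the helpers_test.go implementation):

// status.go: run `minikube status` and report its exit code instead of
// failing on it. The command line mirrors the invocation logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "newest-cni-525200")
	out, err := cmd.Output() // stdout is still returned when the command exits non-zero
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The harness above notes that exit status 2 "may be ok".
		fmt.Printf("%s(exit status %d)\n", out, ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // e.g. minikube not on PATH
	}
	fmt.Printf("%s", out)
}
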
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25: (1.4794511s)
E1210 07:35:57.082430   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:57.090426   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:57.103427   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:57.127433   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:57.170650   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:35:57.252097   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat docker --no-pager                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo docker system info                                       │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cri-dockerd --version                                    │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo containerd config dump                                   │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat crio --no-pager                            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo crio config                                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ delete  │ -p custom-flannel-648600                                                               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ image   │ newest-cni-525200 image list --format=json                                             │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ pause   │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ unpause │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:31:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
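
The header above documents klog's line format, which every entry below follows. A small sketch of splitting one such line with Go's regexp package (illustrative only):

// klogparse.go: split one klog line per the format documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ..."
	if m := klogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
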
	I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.483636    2240 out.go:374] Setting ErrFile to fd 1148...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.498633    2240 out.go:368] Setting JSON to false
	I1210 07:31:27.500624    2240 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10819,"bootTime":1765341068,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:31:27.500624    2240 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:31:27.505874    2240 out.go:179] * [custom-flannel-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:31:27.510785    2240 notify.go:221] Checking for updates...
	I1210 07:31:27.513604    2240 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:31:27.516776    2240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:31:27.521423    2240 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:31:27.524646    2240 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:31:27.526628    2240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:27.530138    2240 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:27.530637    2240 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.530927    2240 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.531072    2240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:31:27.674116    2240 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:31:27.679999    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:27.935225    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:27.906881904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:27.940210    2240 out.go:179] * Using the docker driver based on user configuration
	I1210 07:31:27.947210    2240 start.go:309] selected driver: docker
	I1210 07:31:27.947210    2240 start.go:927] validating driver "docker" against <nil>
	I1210 07:31:27.947210    2240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:31:28.038927    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:28.306393    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:28.276193336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:28.307456    2240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:31:28.308474    2240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:31:28.311999    2240 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:31:28.314563    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:31:28.314921    2240 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 07:31:28.314921    2240 start.go:353] cluster config:
	{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:31:28.317704    2240 out.go:179] * Starting "custom-flannel-648600" primary control-plane node in "custom-flannel-648600" cluster
	I1210 07:31:28.318967    2240 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:31:28.320981    2240 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:28.323967    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:28.323967    2240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:31:28.370604    2240 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.410253    2240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:31:28.410253    2240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:31:28.586590    2240 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
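
The two 404 warnings above show the preload lookup order: the GCS bucket is probed first, then the GitHub releases mirror, and only when both miss does the start fall back to caching individual images (the localpath.go lines that follow). A hedged sketch of that probe, with the URLs copied from the log (the program itself is illustrative, not minikube's preload.go):

// preloadcheck.go: probe the preload tarball URLs from the log above in
// order, stopping at the first hit. Both return 404 in this run.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	tarball := "preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4"
	urls := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/" + tarball,
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/" + tarball,
	}
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			fmt.Println("probe failed:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(resp.StatusCode, u)
		if resp.StatusCode == http.StatusOK {
			return // a real start would download this tarball
		}
	}
	fmt.Println("no preload available; falling back to per-image cache")
}
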
	I1210 07:31:28.586590    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:28.586590    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json: {Name:mk37135597d0b3e0094e1cb1b5ff50d942db06b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:28.587928    2240 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:31:28.587928    2240 start.go:360] acquireMachinesLock for custom-flannel-648600: {Name:mk4a3a34c58cff29c46217d57a91ed79fc9f522b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:28.588459    2240 start.go:364] duration metric: took 531.3µs to acquireMachinesLock for "custom-flannel-648600"
	I1210 07:31:28.588615    2240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:31:28.588742    2240 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:31:28.592548    2240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:31:28.593172    2240 start.go:159] libmachine.API.Create for "custom-flannel-648600" (driver="docker")
	I1210 07:31:28.593172    2240 client.go:173] LocalClient.Create starting
	I1210 07:31:28.593172    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.601656    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:31:28.702719    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:31:28.710721    2240 network_create.go:284] running [docker network inspect custom-flannel-648600] to gather additional debugging logs...
	I1210 07:31:28.710721    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600
	W1210 07:31:28.938963    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 returned with exit code 1
	I1210 07:31:28.938963    2240 network_create.go:287] error running [docker network inspect custom-flannel-648600]: docker network inspect custom-flannel-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-648600 not found
	I1210 07:31:28.938963    2240 network_create.go:289] output of [docker network inspect custom-flannel-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-648600 not found
	
	** /stderr **
	I1210 07:31:28.945949    2240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:31:29.091971    2240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.381586    2240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.465291    2240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a8ae0}
	I1210 07:31:29.465291    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:31:29.470056    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.046347    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.046347    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.046347    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:31:30.140283    2240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.262644    2240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e1d40}
	I1210 07:31:30.262866    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:31:30.267646    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.581811    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.581811    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.581811    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.76.0/24, will retry: subnet is taken
	I1210 07:31:30.621040    2240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.648052    2240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cde450}
	I1210 07:31:30.648052    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:31:30.656045    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	I1210 07:31:30.870907    2240 network_create.go:108] docker network custom-flannel-648600 192.168.85.0/24 created
	I1210 07:31:30.870907    2240 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-648600" container
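
The retries above walk a ladder of private /24 subnets (…49, …58, …67, …76, …85, nine apart), skipping any that are reserved or that the daemon rejects as overlapping, and then derive the node's static IP as the first client address after the gateway (.2 after .1). A simplified sketch of that walk, with the subnets already taken in this run hard-coded (not the real network.go/network_create.go logic):

// subnetwalk.go: step through the 192.168.x.0/24 ladder seen above and
// report the gateway and static node IP for the first free subnet.
package main

import "fmt"

func main() {
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true} // from the log
	for third := 49; third <= 247; third += 9 {
		if taken[third] {
			fmt.Printf("skipping 192.168.%d.0/24: taken\n", third)
			continue
		}
		fmt.Printf("using 192.168.%d.0/24 gateway=192.168.%d.1 node=192.168.%d.2\n",
			third, third, third)
		return
	}
	fmt.Println("no free subnet found")
}

With this run's taken set it lands on 192.168.85.0/24 and node IP 192.168.85.2, matching the lines above.
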
	I1210 07:31:30.881906    2240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:31:31.006456    2240 cli_runner.go:164] Run: docker volume create custom-flannel-648600 --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:31:31.098467    2240 oci.go:103] Successfully created a docker volume custom-flannel-648600
	I1210 07:31:31.104469    2240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2058554s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:31:31.792496    2240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.2053301s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:31:31.794500    2240 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.794500    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:31:31.794500    2240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2078599s
	I1210 07:31:31.795487    2240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:31:31.796493    2240 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.796493    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:31:31.796493    2240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.2098526s
	I1210 07:31:31.796493    2240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:31:31.809204    2240 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.809204    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:31:31.809204    2240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2225634s
	I1210 07:31:31.809728    2240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:31:31.821783    2240 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.822582    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:31:31.822582    2240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.2354164s
	I1210 07:31:31.822582    2240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:31:31.828690    2240 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.828690    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:31:31.828690    2240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.2420491s
	I1210 07:31:31.828690    2240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:31:31.868175    2240 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.869189    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:31:31.869189    2240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.2820228s
	I1210 07:31:31.869189    2240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:31:31.869189    2240 cache.go:87] Successfully saved all images to host disk.
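(Editor's note: the cache.go lines above all follow one pattern per image: take a per-image lock, skip the save when the cached tarball already exists, otherwise write it and log the duration. A minimal Go sketch of that check-then-save flow, illustrative only; ensureCached and the inline saver are invented names, not minikube's API.)

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    // One lock per image name, like the per-image "acquiring lock" lines above.
    var locks sync.Map

    // ensureCached skips the save when the tarball is already on disk,
    // mirroring the "exists" / "save to tar file ... succeeded" pairs in the log.
    func ensureCached(image, path string, save func(image, path string) error) error {
        mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(path); err == nil {
            fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, path, time.Since(start))
            return nil
        }
        if err := save(image, path); err != nil {
            return fmt.Errorf("save to tar file %s -> %s failed: %w", image, path, err)
        }
        fmt.Printf("save to tar file %s -> %s succeeded\n", image, path)
        return nil
    }

    func main() {
        // Stand-in saver; the real code exports the image from a registry or daemon.
        saver := func(image, path string) error { return os.WriteFile(path, nil, 0o644) }
        _ = ensureCached("registry.k8s.io/pause:3.10.1", "pause_3.10.1.tar", saver)
    }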
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
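(Editor's note: every "No container was found matching ..." warning above comes from the same probe repeated per control-plane component: list all containers named k8s_<component> and warn when none exist. A hedged Go sketch of that loop using os/exec; the component list is copied from the log, the rest is illustrative.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same component list that the log above cycles through.
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("probe for %q failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }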
	I1210 07:31:32.772569    2240 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6680738s)
	I1210 07:31:32.772569    2240 oci.go:107] Successfully prepared a docker volume custom-flannel-648600
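(Editor's note: the "preload-sidecar" run that just completed is a volume-priming trick: a disposable container (--rm) mounts the named volume at /var and uses /usr/bin/test as its entrypoint with -d /var/lib as the argument, so one short-lived container both materializes the volume and verifies the target directory exists. A rough Go equivalent driving the docker CLI; prepareVolume is an invented helper.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // prepareVolume creates/verifies a named volume by running a disposable
    // container whose only job is `test -d /var/lib` against the mounted volume.
    func prepareVolume(name, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", name+":/var", image, "-d", "/var/lib")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("volume %s not prepared: %v (%s)", name, err, out)
        }
        return nil
    }

    func main() {
        if err := prepareVolume("custom-flannel-648600",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"); err != nil {
            fmt.Println(err)
        }
    }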
	I1210 07:31:32.772569    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:32.777565    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:33.023291    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:33.001747684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:33.027286    2240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:31:33.264619    2240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-648600 --name custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-648600 --network custom-flannel-648600 --ip 192.168.85.2 --volume custom-flannel-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:31:34.003194    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Running}}
	I1210 07:31:34.069196    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.137196    2240 cli_runner.go:164] Run: docker exec custom-flannel-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:31:34.255530    2240 oci.go:144] the created container "custom-flannel-648600" has a running status.
	I1210 07:31:34.255530    2240 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:34.371827    2240 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:31:34.454671    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.514682    2240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:31:34.514682    2240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:31:34.665673    2240 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
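(Editor's note: kic.go above creates a fresh RSA key for the node, pushes the public half into /home/docker/.ssh/authorized_keys, and restricts permissions on the private key file. A self-contained sketch of the key-generation half using golang.org/x/crypto/ssh; writeKeyPair is an invented name, minikube's real helper differs.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeKeyPair emits id_rsa / id_rsa.pub files like the ones the log
    // shows being pushed into the node's authorized_keys.
    func writeKeyPair(path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // 0600: only the current user may read the private key, as the
        // "ensuring only current user has permissions" line above requires.
        if err := os.WriteFile(path, privPEM, 0o600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
    }

    func main() {
        if err := writeKeyPair("id_rsa"); err != nil {
            log.Fatal(err)
        }
    }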
	I1210 07:31:37.044619    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:37.095607    2240 machine.go:94] provisionDockerMachine start ...
	I1210 07:31:37.098607    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.155601    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.171620    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.171620    2240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:31:37.347331    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.347331    2240 ubuntu.go:182] provisioning hostname "custom-flannel-648600"
	I1210 07:31:37.350327    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.408671    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.409222    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.409222    2240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname
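(Editor's note: the "native" SSH client above dials 127.0.0.1:58200, the host port Docker mapped to the container's 22/tcp, and authenticates as the docker user with the generated key. A minimal sketch with golang.org/x/crypto/ssh; host-key checking is disabled here only because the target is a local throwaway container.)

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node, never for production
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:58200", cfg) // port taken from the 22/tcp mapping in the log
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("SSH cmd output: %s", out)
    }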
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:37.617301    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.621329    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.680493    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.681514    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.681514    2240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:31:37.850452    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
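(Editor's note: the shell snippet that just ran is an idempotent hosts-file edit: leave /etc/hosts alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new mapping. The same logic in Go; updateHosts is an invented name.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // updateHosts mirrors the grep/sed/tee sequence from the log:
    // no-op if already mapped, rewrite an existing 127.0.1.1 line, else append.
    func updateHosts(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            fields := strings.Fields(l)
            for i := 1; i < len(fields); i++ {
                if fields[i] == hostname {
                    return nil // already mapped; nothing to do
                }
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias line
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
            }
        }
        lines = append(lines, "127.0.1.1 "+hostname) // no alias line yet; append one
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        if err := updateHosts("/etc/hosts", "custom-flannel-648600"); err != nil {
            fmt.Println(err)
        }
    }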
	I1210 07:31:37.850452    2240 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:31:37.850452    2240 ubuntu.go:190] setting up certificates
	I1210 07:31:37.850452    2240 provision.go:84] configureAuth start
	I1210 07:31:37.855263    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:37.926854    2240 provision.go:143] copyHostCerts
	I1210 07:31:37.927569    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:31:37.927608    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:31:37.928059    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:31:37.928961    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:31:37.928961    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:31:37.928961    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:31:37.930358    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:31:37.930390    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:31:37.930744    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:31:37.931754    2240 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-648600 san=[127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube]
	I1210 07:31:38.038131    2240 provision.go:177] copyRemoteCerts
	I1210 07:31:38.042277    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:31:38.045314    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.098793    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:38.243502    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:31:38.284050    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1210 07:31:38.320436    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:31:38.351829    2240 provision.go:87] duration metric: took 501.3694ms to configureAuth
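(Editor's note: configureAuth above issues a server certificate whose SANs cover every address the machine answers to: 127.0.0.1, the container IP 192.168.85.2, the node name, localhost, and minikube. A self-contained sketch of producing such a cert with crypto/x509; self-signed here for brevity, whereas minikube signs with its own CA.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-648600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the "san=[...]" log line above.
            DNSNames:    []string{"custom-flannel-648600", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        // Self-signed for brevity: the template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
            log.Fatal(err)
        }
    }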
	I1210 07:31:38.351829    2240 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:31:38.352840    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:38.355824    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.405824    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.405824    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.405824    2240 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:31:38.582107    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:31:38.582107    2240 ubuntu.go:71] root file system type: overlay
	I1210 07:31:38.582107    2240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:31:38.585874    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.646407    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.646407    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.646407    2240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:31:38.847766    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
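(Editor's note: the unit echoed back above illustrates the one systemd subtlety its embedded comments describe: a non-oneshot service may carry only one ExecStart=, so an override must first assign an empty ExecStart= to clear the inherited command before supplying its own. A sketch of rendering such an override with text/template; unitTmpl and its fields are invented for illustration.)

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // The blank "ExecStart=" line is load-bearing: without it systemd rejects
    // the override with "more than one ExecStart= setting, which is only
    // allowed for Type=oneshot services".
    const unitTmpl = `[Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// {{range .ExtraArgs}}{{.}} {{end}}
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unitTmpl))
        data := struct{ ExtraArgs []string }{ExtraArgs: []string{
            "--tlsverify", "--tlscacert", "/etc/docker/ca.pem", "--label", "provider=docker",
        }}
        if err := t.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
    }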
	
	I1210 07:31:38.852241    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.938899    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.938899    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.938899    2240 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:31:40.711527    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:31:38.832035101 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1210 07:31:40.711665    2240 machine.go:97] duration metric: took 3.616002s to provisionDockerMachine
	I1210 07:31:40.711665    2240 client.go:176] duration metric: took 12.1183047s to LocalClient.Create
	I1210 07:31:40.711665    2240 start.go:167] duration metric: took 12.1183047s to libmachine.API.Create "custom-flannel-648600"
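(Editor's note: the restart sequence above only fired because diff -u exited non-zero; the idiom `diff old new || { mv new old; daemon-reload; enable; restart; }` keeps provisioning idempotent, since an unchanged unit file short-circuits the move and the service restart entirely. A hedged Go equivalent of that guard.)

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        oldPath := "/lib/systemd/system/docker.service"
        newPath := oldPath + ".new"

        oldData, _ := os.ReadFile(oldPath) // a missing file reads as empty, which forces an update
        newData, err := os.ReadFile(newPath)
        if err != nil {
            fmt.Println(err)
            return
        }
        if bytes.Equal(oldData, newData) {
            fmt.Println("unit unchanged; skipping restart") // the diff-exit-0 branch
            return
        }
        if err := os.Rename(newPath, oldPath); err != nil {
            fmt.Println(err)
            return
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v (%s)\n", args, err, out)
                return
            }
        }
    }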
	I1210 07:31:40.711665    2240 start.go:293] postStartSetup for "custom-flannel-648600" (driver="docker")
	I1210 07:31:40.711665    2240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:31:40.715645    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:31:40.718723    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:40.776513    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:40.917451    2240 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:31:40.923444    2240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:31:40.923444    2240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:31:40.923444    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:31:40.929458    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:31:40.942452    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:31:40.977491    2240 start.go:296] duration metric: took 265.8211ms for postStartSetup
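(Editor's note: the filesync scan above mirrors anything under .minikube\files into the node at the same relative path, which is how the host's 113042.pem lands in /etc/ssl/certs. A print-only sketch of the scan that maps local assets to node destinations; the real code then scp's each file.)

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        root := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\files`
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            // The path under files\ becomes the absolute destination in the node.
            dst := "/" + filepath.ToSlash(rel)
            fmt.Printf("local asset: %s -> %s\n", path, dst)
            return nil
        })
        if err != nil {
            fmt.Println(err)
        }
    }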
	I1210 07:31:40.981481    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.034489    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:41.039496    2240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:31:41.043532    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.111672    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
E1210 07:35:57.413523   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	I1210 07:31:41.255080    2240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:31:41.269938    2240 start.go:128] duration metric: took 12.6809984s to createHost
	I1210 07:31:41.269938    2240 start.go:83] releasing machines lock for "custom-flannel-648600", held for 12.6812262s
	I1210 07:31:41.273664    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.324666    2240 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:31:41.329678    2240 ssh_runner.go:195] Run: cat /version.json
	I1210 07:31:41.329678    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.334670    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	W1210 07:31:41.497715    2240 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1210 07:31:41.501431    2240 ssh_runner.go:195] Run: systemctl --version
	I1210 07:31:41.518880    2240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:31:41.528176    2240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:31:41.531184    2240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:31:41.579185    2240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
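(Editor's note: the find/-exec step above neutralizes competing CNI configs non-destructively: every bridge or podman config in /etc/cni/net.d is renamed with a .mk_disabled suffix rather than deleted, so it can be restored later. A Go sketch of the same rename pass.)

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            // Match the find expression: *bridge* or *podman*, not already disabled.
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join("/etc/cni/net.d", name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                fmt.Println(err)
                continue
            }
            fmt.Printf("disabled %s\n", src)
        }
    }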
	I1210 07:31:41.579185    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:41.579185    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:41.579185    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:31:41.596178    2240 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:31:41.596178    2240 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
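(Editor's note: the registry probe appears to have failed for an incidental reason visible in the stderr above: the command was assembled with the Windows binary name curl.exe but executed inside the Linux guest, where only curl exists, so bash returns status 127 and minikube falls through to the proxy warning. A trivial, purely illustrative sketch of picking the binary by target OS; curlBinary is an invented name.)

    package main

    import "fmt"

    // curlBinary picks the client binary by the OS that will run the probe.
    // The log above shows what happens when the host's name ("curl.exe")
    // leaks into a command executed inside the Linux guest: exit status 127.
    func curlBinary(targetOS string) string {
        if targetOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Println(curlBinary("linux"))   // curl
        fmt.Println(curlBinary("windows")) // curl.exe
    }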
	I1210 07:31:41.606178    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:31:41.626187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:31:41.641198    2240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:31:41.645182    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:31:41.668187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.687179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:31:41.706179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.724180    2240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:31:41.742180    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:31:41.759185    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:31:41.778184    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:31:41.795180    2240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:31:41.811185    2240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:31:41.828187    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:41.983806    2240 ssh_runner.go:195] Run: sudo systemctl restart containerd
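(Editor's note: the run of sed edits above rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the detected cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, all before the daemon-reload and restart. One of those substitutions done in Go with regexp instead of sed; setSystemdCgroup is an invented name.)

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    var systemdCgroupRe = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)

    // setSystemdCgroup mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    func setSystemdCgroup(path string, value bool) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        repl := []byte(fmt.Sprintf("${1}SystemdCgroup = %v", value)) // ${1} keeps the original indent
        return os.WriteFile(path, systemdCgroupRe.ReplaceAll(data, repl), 0o644)
    }

    func main() {
        if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
            fmt.Println(err)
        }
    }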
	I1210 07:31:42.163822    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:42.163822    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:42.167818    2240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:31:42.193819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.216825    2240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:31:42.280833    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.301820    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:31:42.320823    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:31:42.345832    2240 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:31:42.358831    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:31:42.373835    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:31:42.401822    2240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
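Editor's note: the backtick expression in the "container status" command above is a small fallback chain: if crictl is not on PATH, `which crictl` prints nothing, `echo crictl` keeps the command line non-empty, the bare invocation then fails, and the outer || drops through to plain docker. A roughly equivalent long form:

    # long form of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a || sudo docker ps -a
    else
      sudo docker ps -a
    fi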
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:42.551686    2240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:31:42.712827    2240 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:31:42.712827    2240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
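Editor's note: the 130-byte /etc/docker/daemon.json pushed above is what tells dockerd to match the host's cgroupfs driver. A plausible reconstruction follows; only the cgroup-driver choice is confirmed by the log, the remaining keys are an assumption about minikube's template:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }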
	I1210 07:31:42.735824    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:31:42.756828    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:42.906845    2240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:31:43.937123    2240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0302614s)
	I1210 07:31:43.944887    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:31:43.971819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:31:43.996364    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.030377    2240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:31:44.173489    2240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:31:44.332105    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.483148    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:31:44.509404    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:31:44.533765    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.690011    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:31:44.790147    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.810716    2240 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:31:44.813714    2240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:31:44.820719    2240 start.go:564] Will wait 60s for crictl version
	I1210 07:31:44.824717    2240 ssh_runner.go:195] Run: which crictl
	I1210 07:31:44.835701    2240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:31:44.880457    2240 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:31:44.883920    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:44.928460    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:45.060104    2240 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:31:45.062900    2240 cli_runner.go:164] Run: docker exec -t custom-flannel-648600 dig +short host.docker.internal
	I1210 07:31:45.193754    2240 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:31:45.197851    2240 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:31:45.204880    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
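Editor's note: the /etc/hosts one-liner above is an idempotent upsert: filter out any previous host.minikube.internal entry, append the fresh mapping, stage to a PID-suffixed temp file, then copy back with sudo. The same pattern, unrolled:

    HOSTS_LINE="192.168.65.254	host.minikube.internal"
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$HOSTS_LINE"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # $$ is the shell PID: a cheap unique temp name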
	I1210 07:31:45.225085    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:45.282870    2240 kubeadm.go:884] updating cluster {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:31:45.283875    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:45.286873    2240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:31:45.317078    2240 docker.go:691] Got preloaded images: 
	I1210 07:31:45.317078    2240 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:31:45.317078    2240 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:31:45.330428    2240 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.336331    2240 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.341435    2240 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.341435    2240 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.347452    2240 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.347452    2240 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.352434    2240 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.355426    2240 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.358455    2240 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.361429    2240 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.365434    2240 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.366439    2240 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.369440    2240 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:45.370428    2240 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.374431    2240 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.379430    2240 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:31:45.411422    2240 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.466193    2240 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.518621    2240 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.573883    2240 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.622874    2240 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.672905    2240 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.723034    2240 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.771034    2240 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:31:45.842424    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.842823    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.869734    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890739    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890951    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.897121    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.901151    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.922366    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:31:45.956325    2240 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:31:45.956325    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:45.956325    2240 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.961320    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.992754    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:31:46.059786    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:31:46.060783    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.065694    2240 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:31:46.065694    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.065694    2240 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:31:46.067530    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.067911    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.068609    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:46.070610    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:31:46.073597    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.074603    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.147805    2240 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:31:46.151807    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.261151    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:46.262119    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:46.272115    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.272115    2240 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:31:46.272115    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.272115    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:31:46.272115    2240 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.272115    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:31:46.277116    2240 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.278121    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.289109    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.293116    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:31:46.476808    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:31:46.481795    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:46.504793    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:31:46.504793    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
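Editor's note: each "existence check ... Process exited with status 1" block above is expected, not a failure: minikube stats the remote image tarball first and only scps it over when the stat fails. The probe-then-copy pattern, sketched with a hypothetical path:

    IMG=/var/lib/minikube/images/pause_3.10.1   # hypothetical example path
    if ! stat -c "%s %y" "$IMG" >/dev/null 2>&1; then
      echo "missing; copying $IMG from the host cache"   # the log does this via scp
    fi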
	I1210 07:31:46.672791    2240 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.672791    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:31:47.172597    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
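Editor's note: loading a cached image is a straight stream into the daemon; sudo cat handles the root-owned tarball and docker load ingests it. The pattern from the log, alongside the equivalent -i form:

    # pattern from the log
    sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load
    # equivalent when the invoking user can read the file
    sudo docker load -i /var/lib/minikube/images/pause_3.10.1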
	I1210 07:31:47.208589    2240 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:47.208589    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:48.287161    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0785558s)
	I1210 07:31:48.287161    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:31:48.287161    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:48.287161    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:31:51.130300    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.8430943s)
	I1210 07:31:51.130300    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:31:51.130300    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:51.130300    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:31:52.383759    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.2534401s)
	I1210 07:31:52.383759    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:31:52.383759    2240 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:52.383759    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.448511    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:52.475325    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:52.506360    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.506360    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:52.510172    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:52.540147    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.540147    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:52.544437    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:52.575774    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.575774    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:52.579336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:52.610061    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.610061    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:52.613342    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:52.642765    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.642765    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:52.649215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:52.678701    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.678701    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:52.682526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:52.710203    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.710203    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:52.715870    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:52.745326    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.745351    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:52.745351    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:52.745397    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.811401    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:52.811401    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:52.853138    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:52.853138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:52.968335    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:52.968335    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:52.968335    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:52.995279    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:52.995802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
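Note: the container-status probe above falls back from crictl to plain docker: `which crictl || echo crictl` substitutes either the resolved crictl path or the bare name (which then fails and triggers the "|| sudo docker ps -a" branch). The same fallback written explicitly:

    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a      # CRI-aware listing when crictl is installed
    else
        sudo docker ps -a      # fall back to the Docker CLI
    fi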
	I1210 07:31:55.245680    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8618761s)
	I1210 07:31:55.245680    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:31:55.246466    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:55.246522    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:31:56.790187    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.5436405s)
	I1210 07:31:56.790187    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:31:56.790187    2240 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:56.790187    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:55.548093    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:55.571449    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:55.603901    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.603970    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:55.607695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:55.639065    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.639065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:55.643536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:55.671930    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.671930    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:55.675998    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:55.704460    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.704460    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:55.708947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:55.739257    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.739257    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:55.742852    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:55.772295    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.772344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:55.776423    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:55.803812    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.803812    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:55.809849    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:55.841586    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.841647    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
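Note: under the cri-dockerd runtime, pod containers are named with a k8s_<container>_<pod>_<namespace>_... prefix, so filtering on name=k8s_kube-apiserver isolates the apiserver container; the empty results above mean the control-plane components never started (or their containers were removed). The same probe, with names included for readability:

    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Names}}'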
	I1210 07:31:55.841647    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:55.841647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:55.916368    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:55.916368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:55.958653    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:55.958653    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:56.055702    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:56.055702    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:56.055702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:56.084883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:56.084883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.290113    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (4.4998566s)
	I1210 07:32:01.290113    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:32:01.290113    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:32:01.290113    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	I1210 07:31:58.642350    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:58.668189    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:58.699633    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.699633    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:58.705036    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:58.738553    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.738553    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:58.742579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:58.772414    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.772414    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:58.775757    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:58.804872    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.804872    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:58.808509    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:58.835398    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.835398    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:58.843124    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:58.871465    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.871465    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:58.875535    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:58.905029    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.905108    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:58.910324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:58.953100    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.953100    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:58.953100    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:58.953100    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:59.012946    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:59.012946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:59.052964    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:59.052964    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:59.146228    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:59.146228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:59.146228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:59.173200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:59.173200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.725170    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:01.746739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:01.779670    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.779670    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:01.783967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:01.812617    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.812617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:01.817482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:01.848083    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.848083    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:01.852344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:01.883648    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.883648    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:01.887655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:01.918403    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.918403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:01.922409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:01.961721    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.961721    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:01.969744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:01.998302    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.998302    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:02.003804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:02.032315    1436 logs.go:282] 0 containers: []
	W1210 07:32:02.032315    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:02.032315    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:02.032315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:02.096900    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:02.096900    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:02.136137    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:02.136137    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:02.227732    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:02.227732    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:02.227732    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:02.255236    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:02.255236    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:03.670542    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3803916s)
	I1210 07:32:03.670542    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:32:03.670542    2240 cache_images.go:125] Successfully loaded all cached images
	I1210 07:32:03.670542    2240 cache_images.go:94] duration metric: took 18.3531776s to LoadCachedImages
	I1210 07:32:03.670542    2240 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1210 07:32:03.670542    2240 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
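Note: the empty "ExecStart=" line in the kubelet unit above is the standard systemd idiom for replacing, rather than appending to, an inherited ExecStart: the first assignment clears the old value, the second sets the new command line. Generic shape (daemon path hypothetical):

    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/mydaemon --flag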
	I1210 07:32:03.674057    2240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:32:03.753844    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:03.753844    2240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:03.753844    2240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-648600 NodeName:custom-flannel-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:03.753844    2240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
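Note: the generated config above drives the "kubeadm init" invocation seen later in this log. It can be sanity-checked without mutating the node via kubeadm's dry-run mode (sketch, run on the node itself):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run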
	I1210 07:32:03.758233    2240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.772950    2240 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:32:03.777455    2240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
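Note: the "?checksum=file:<url>.sha256" suffix above is handled client-side by the download layer, not the server: it instructs the downloader to fetch the published SHA-256 file and verify the binary against it. The same verification done by hand, with the kubelet URL from the log (the .sha256 file contains only the hex digest, so the filename is appended for sha256sum):

    curl -LO https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet
    echo "$(curl -sL https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum -c -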
	I1210 07:32:03.796039    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:03.796814    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:32:03.796843    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:32:03.817843    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:32:03.818011    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:32:03.818298    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:32:03.818803    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:32:03.822978    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:32:03.833074    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:32:03.833638    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 07:32:05.838364    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:05.850364    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1210 07:32:05.870151    2240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:05.891336    2240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:32:05.915010    2240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:05.922767    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
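Note: the /etc/hosts update above is a filter-and-append pattern: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is echoed after it, the result lands in a PID-keyed temp file (/tmp/h.$$), and "sudo cp" writes it back over /etc/hosts in place, preserving the file's inode and permissions. Generic form (hostname and IP are placeholders):

    { grep -v $'\tcontrol-plane.example$' /etc/hosts; printf '192.0.2.1\tcontrol-plane.example\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts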
	I1210 07:32:05.942185    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:06.099167    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:06.121581    2240 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600 for IP: 192.168.85.2
	I1210 07:32:06.121613    2240 certs.go:195] generating shared ca certs ...
	I1210 07:32:06.121640    2240 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.121920    2240 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:32:06.122447    2240 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:32:06.122578    2240 certs.go:257] generating profile certs ...
	I1210 07:32:06.122578    2240 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key
	I1210 07:32:06.122578    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt with IP's: []
	I1210 07:32:06.321440    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt ...
	I1210 07:32:06.321440    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt: {Name:mk30a4977cc0d8ffd50678b3c23caa1e53531dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.322223    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key ...
	I1210 07:32:06.322223    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key: {Name:mke10982a653bbe15c8edebf2f43dc216f9268be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.323200    2240 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba
	I1210 07:32:06.323200    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:32:06.341062    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba ...
	I1210 07:32:06.341062    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba: {Name:mk0e9e825524eecc7aedfd18bb3bfe0b08c0466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342014    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba ...
	I1210 07:32:06.342014    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba: {Name:mk42b80e536f4c7e07cd83fa60afbb5af1e6e8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342947    2240 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt
	I1210 07:32:06.354920    2240 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key
	I1210 07:32:06.355812    2240 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key
	I1210 07:32:06.355812    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt with IP's: []
	I1210 07:32:06.438517    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt ...
	I1210 07:32:06.438517    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt: {Name:mk49d63357d91f886b5db1adca8a8959ac8a2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.439596    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key ...
	I1210 07:32:06.439596    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key: {Name:mkd00fe816a16ba7636ee1faff5584095510b505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.454147    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:32:06.454968    2240 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:06.454968    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:32:06.455228    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:32:06.455417    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:32:06.455581    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:32:06.455768    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:32:06.456703    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:06.490234    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:06.516382    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:06.546895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:06.579157    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:32:06.611194    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:32:06.642582    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:06.673947    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:32:06.702762    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:32:06.734932    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:32:06.763895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:06.794884    2240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:32:06.824804    2240 ssh_runner.go:195] Run: openssl version
	I1210 07:32:06.839620    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.863187    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:32:06.881235    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.889982    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.896266    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.945361    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:06.965592    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:32:06.982615    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.000345    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:32:07.019650    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.028440    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.032681    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.080664    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.098781    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.119820    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.138968    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:07.157588    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.166110    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.169123    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.218939    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:32:07.238245    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
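Note: the openssl hash/symlink sequence above implements OpenSSL's hashed-directory layout for trust anchors: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash comes from "openssl x509 -hash" (b5213941 is the hash of minikubeCA here). The pattern in two lines:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"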
	I1210 07:32:07.255844    2240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:07.263714    2240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:32:07.263714    2240 kubeadm.go:401] StartCluster: {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:07.267520    2240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:32:07.300048    2240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:07.317060    2240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:32:07.333647    2240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:32:07.337744    2240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:32:07.353638    2240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:32:07.353638    2240 kubeadm.go:158] found existing configuration files:
	
	I1210 07:32:07.357869    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:32:07.371538    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:32:07.375620    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:32:07.392582    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:32:07.408459    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:32:07.412872    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:32:07.431340    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.446697    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:32:07.451332    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.472431    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:04.810034    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:04.838035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:04.888039    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.888039    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:04.892025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:04.955032    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.955032    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:04.959038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:04.995031    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.995031    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:04.999034    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:05.035036    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.035036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:05.040047    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:05.079034    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.079034    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:05.084038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:05.123032    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.123032    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:05.128035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:05.165033    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.165033    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:05.169028    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:05.205183    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.205183    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:05.205183    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:05.205183    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:05.248358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:05.248358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:05.349366    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:05.349366    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:05.349366    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:05.384377    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:05.384377    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:05.439383    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:05.439383    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:08.021198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:08.045549    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:08.076568    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.076568    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:08.082429    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:08.113514    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.113514    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:08.117280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:08.145243    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.145243    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:08.151846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:08.182475    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.182475    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:08.186570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:08.214500    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.214554    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:08.218698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:08.250229    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.250229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:08.254493    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:08.298394    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.298394    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:08.302457    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:08.331561    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.331561    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:08.331561    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:08.331561    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:08.368913    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:08.368913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:32:07.487983    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:32:07.492242    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:32:07.510557    2240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:32:07.626646    2240 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:32:07.630270    2240 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:32:07.725615    2240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
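
Interleaved here, the parallel custom-flannel run (pid 2240) removes a stale scheduler.conf and re-runs kubeadm init with preflight checks suppressed for conditions minikube manages itself; the three [WARNING] lines are non-fatal. The warned-about conditions can be reproduced by hand with standard checks (a sketch, assuming a systemd-based node):

    swapon --show                 # any output: the Swap condition being warned about
    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" means cgroup v2; anything else is v1
    systemctl is-enabled kubelet  # "disabled" explains the Service-Kubelet warning
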
	W1210 07:32:08.453343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
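
The describe-nodes failure is the same story as the empty container probes: kubectl runs on the node with a kubeconfig pointing at localhost:8443, and with no kube-apiserver container the connection is refused outright. A quick confirmation from inside the node, using only standard tools (a hedged sketch, not the test's own check):

    sudo docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
    # Empty output means nothing can be serving 8443, hence:
    curl -sk https://localhost:8443/livez || echo "apiserver not reachable"
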
	I1210 07:32:08.453378    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:08.453417    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:08.488219    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:08.488219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:08.533777    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:08.533777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
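
The container-status gather relies on a small fallback idiom: "which crictl || echo crictl" expands to the crictl path when it exists, or to the bare word crictl when it does not, and the trailing "|| sudo docker ps -a" then catches either a missing binary or a failed call. Spelled out:

    cmd=$(which crictl || echo crictl)       # real path, or a deliberately failing word
    sudo "$cmd" ps -a || sudo docker ps -a   # docker listing is the fallback view
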
	I1210 07:32:11.100898    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:11.123310    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:11.154369    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.154369    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:11.158211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:11.188349    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.188419    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:11.191999    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:11.218233    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.218263    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:11.222177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:11.248157    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.248157    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:11.252075    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:11.280934    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.280934    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:11.284871    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:11.316173    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.316225    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:11.320150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:11.350432    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.350494    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:11.354282    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:11.381767    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.381819    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:11.381819    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:11.381874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.447079    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:11.447079    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:11.485987    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:11.485987    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:11.568313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:11.568365    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:11.568408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:11.599474    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:11.599518    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
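
This warning comes from a third interleaved stream (pid 6044, the no-preload test): it polls the node's Ready condition through the forwarded port 57440 and gets EOF, a dropped connection rather than a refusal. The equivalent condition check, written against the profile's kubectl context (a hypothetical standalone form; the test queries the API endpoint directly):

    kubectl --context no-preload-099700 get node no-preload-099700 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
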
	I1210 07:32:14.165429    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:14.189363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:14.220411    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.220478    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:14.223878    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:14.253748    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.253798    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:14.257409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:14.288235    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.288235    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:14.291689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:14.323349    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.323349    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:14.326680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:14.355227    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.355227    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:14.358704    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:14.389648    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.389648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:14.393032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:14.424212    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.424212    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:14.427425    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:14.457834    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.457834    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:14.457834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:14.457834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:14.486053    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:14.486053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:14.538138    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:14.538138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:14.601542    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:14.601542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:14.638885    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:14.638885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:14.724482    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
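
Note the cadence: the cycles at :08, :11, :14 and onward repeat roughly every three seconds, and each one opens with the same gate before falling back to log gathering:

    # -f matches against the full command line, -x requires the regex to match
    # it exactly, -n keeps only the newest match; non-zero exit = no apiserver.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver process not up"
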
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:29.223517    2240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:32:29.224269    2240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:32:29.224467    2240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:32:29.229027    2240 out.go:252]   - Generating certificates and keys ...
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:32:29.229660    2240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:32:29.229827    2240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:32:29.230468    2240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.230658    2240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:32:29.230768    2240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:32:29.230900    2240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:32:29.231503    2240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:32:29.231582    2240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:32:29.231582    2240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:32:29.234181    2240 out.go:252]   - Booting up control plane ...
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:32:29.234702    2240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:32:29.234874    2240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002366911s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.235267696s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.434241439s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.5023353s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:32:29.236992    2240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:32:29.237590    2240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:32:29.237590    2240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:32:29.237590    2240 kubeadm.go:319] [bootstrap-token] Using token: a4ld74.20ve6i3rm5ksexxo
	I1210 07:32:29.239648    2240 out.go:252]   - Configuring RBAC rules ...
	I1210 07:32:29.239648    2240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:32:29.240674    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:32:29.240944    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:32:29.241383    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:32:29.241649    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:32:29.241668    2240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:32:29.241668    2240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:32:29.242197    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:32:29.242850    2240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:32:29.242850    2240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:32:29.243436    2240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--control-plane 
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:32:29.244018    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.244018    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
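
The custom-flannel init has now succeeded end to end; the join commands embed a fresh bootstrap token plus the SHA-256 of the cluster CA's public key, which a joining node uses to authenticate the control plane. The hash can be recomputed on the control plane with the standard kubeadm recipe (assuming the default CA path):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1
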
	I1210 07:32:29.244018    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:29.246745    2240 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1210 07:32:29.266121    2240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 07:32:29.270492    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1210 07:32:29.280075    2240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1210 07:32:29.280075    2240 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1210 07:32:29.314572    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
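
CNI bootstrap follows the usual push-then-apply pattern: the stat existence check fails, the flannel manifest is pushed to /var/tmp/minikube/cni.yaml, and the cluster-local kubectl applies it against the node's kubeconfig. As a standalone sketch (the real push is minikube's SSH file transfer; a local copy stands in for it here):

    sudo stat -c "%s %y" /var/tmp/minikube/cni.yaml 2>/dev/null \
      || sudo cp kube-flannel.yaml /var/tmp/minikube/cni.yaml   # stand-in for the scp
    sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
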
	I1210 07:32:29.754597    2240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-648600 minikube.k8s.io/updated_at=2025_12_10T07_32_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=custom-flannel-648600 minikube.k8s.io/primary=true
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.770603    2240 ops.go:34] apiserver oom_adj: -16
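
Alongside labeling the node and binding kube-system:default to cluster-admin, minikube reads the apiserver's oom_adj and logs -16. oom_adj is the legacy view of oom_score_adj, and -16 is roughly what the -997 the kubelet typically assigns to critical static pods maps to, meaning the apiserver is close to exempt from the OOM killer. Both interfaces can be read directly (a sketch; pattern as in the log):

    p=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo cat /proc/$p/oom_adj /proc/$p/oom_score_adj
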
	I1210 07:32:29.895974    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.395328    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.896828    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.396414    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.896200    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:32.396778    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
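
The half-second "kubectl get sa default" polls gate the next phase on the default service account existing, which only happens once the controller-manager's serviceaccount controller is running, so this loop doubles as an end-to-end control-plane readiness check. A minimal version of the same wait:

    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
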
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
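Every kubectl attempt above dies with "connect: connection refused" on [::1]:8443, meaning nothing is listening on the apiserver port inside the node yet. A minimal Go sketch of the same probe (hypothetical, not part of the test suite; host and port taken from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint kubectl keeps failing against inside the node.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }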
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
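The cycle above repeats per control-plane component: a "docker ps -a" probe filtered on the k8s_<component> container-name prefix, returning container IDs (zero here, hence the warnings). A minimal Go sketch of that probe, assuming a local docker binary on PATH; the helper name containerIDs is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the probe in the log: list all containers whose
    // name starts with k8s_<component> and return their IDs.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%q: %d containers: %v\n", c, len(ids), ids)
    	}
    }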
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:32.894984    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.397040    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.895777    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:34.084987    2240 kubeadm.go:1114] duration metric: took 4.3302518s to wait for elevateKubeSystemPrivileges
	I1210 07:32:34.085013    2240 kubeadm.go:403] duration metric: took 26.8208803s to StartCluster
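The repeated "kubectl get sa default" runs above are a poll: bring-up is only considered done once the "default" service account exists in the cluster. A minimal client-go sketch of the same wait, under stated assumptions (kubeconfig path from the log; interval and timeout are illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			return err == nil, nil // keep retrying until the SA exists
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("default service account exists")
    }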
	I1210 07:32:34.085095    2240 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.085299    2240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:32:34.087295    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.088397    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:32:34.088397    2240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:32:34.088932    2240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:34.089115    2240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-648600"
	I1210 07:32:34.089272    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.089454    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:32:34.091048    2240 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:34.099313    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.100384    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.101389    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.165121    2240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-648600"
	I1210 07:32:34.165121    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.166107    2240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:32:34.174109    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.177116    2240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:34.177116    2240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:32:34.181109    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.228110    2240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.228110    2240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:32:34.231111    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.232110    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.295102    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
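The ssh clients above connect to the container's published SSH port (22/tcp mapped to 127.0.0.1:58200) as user "docker" with the profile's id_rsa key; subsequent commands run over that transport. A minimal golang.org/x/crypto/ssh sketch of such a client (key path shortened for the sketch; the "uname -a" command is just a placeholder):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("id_rsa") // profile key path shortened for the sketch
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:58200", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("uname -a") // placeholder command
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }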
	I1210 07:32:34.361698    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:32:34.577307    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.743911    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.748484    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:35.145540    2240 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
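The sed pipeline above splices a "hosts" stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.65.254 here), then feeds the edited ConfigMap back with "kubectl replace -f -". A sketch of just the inserted stanza, with values taken from the log (the surrounding Corefile is elided):

    package main

    import "fmt"

    // hostsStanza is the block the pipeline inserts ahead of the
    // "forward . /etc/resolv.conf" line; the address is from the log.
    const hostsStanza = `        hosts {
               192.168.65.254 host.minikube.internal
               fallthrough
            }`

    func main() {
    	// In the real flow the edited Corefile is applied via kubectl replace.
    	fmt.Println(hostsStanza)
    }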
	I1210 07:32:35.149854    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:35.210514    2240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:35.684992    2240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-648600" context rescaled to 1 replicas
	I1210 07:32:35.860846    2240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1123448s)
	I1210 07:32:35.863841    2240 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 07:32:35.869842    2240 addons.go:530] duration metric: took 1.7814171s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1210 07:32:37.217134    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:32:39.747934    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:42.215582    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:44.217341    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:45.217929    2240 node_ready.go:49] node "custom-flannel-648600" is "Ready"
	I1210 07:32:45.217929    2240 node_ready.go:38] duration metric: took 10.0071872s for node "custom-flannel-648600" to be "Ready" ...
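The node_ready wait that just resolved polls the node object until its Ready condition reports True, tolerating transient errors along the way. A minimal client-go sketch of that check (node name and kubeconfig path from the log; the poll interval is illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 15*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "custom-flannel-648600", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient; retry, as the log's wait loop does
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "custom-flannel-648600" is "Ready"`)
    }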
	I1210 07:32:45.217929    2240 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:45.221913    2240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.241224    2240 api_server.go:72] duration metric: took 11.1520714s to wait for apiserver process to appear ...
	I1210 07:32:45.241248    2240 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:45.241297    2240 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58199/healthz ...
	I1210 07:32:45.255531    2240 api_server.go:279] https://127.0.0.1:58199/healthz returned 200:
	ok
	I1210 07:32:45.259632    2240 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:45.259696    2240 api_server.go:131] duration metric: took 18.4479ms to wait for apiserver health ...
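The healthz gate above is a plain HTTPS GET against the forwarded apiserver port, passing once the endpoint returns 200 with body "ok". A minimal sketch (port from the log; certificate verification is skipped because the probe goes through 127.0.0.1 rather than a name on the serving cert, which is acceptable for a test-only probe):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Test-only: 127.0.0.1 is not a SAN on the apiserver cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://127.0.0.1:58199/healthz") // port from the log
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }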
	I1210 07:32:45.259716    2240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:45.268791    2240 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:45.268849    2240 system_pods.go:61] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.268849    2240 system_pods.go:61] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.268894    2240 system_pods.go:74] duration metric: took 9.14ms to wait for pod list to return data ...
	I1210 07:32:45.268935    2240 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:45.273316    2240 default_sa.go:45] found service account: "default"
	I1210 07:32:45.273353    2240 default_sa.go:55] duration metric: took 4.4181ms for default service account to be created ...
	I1210 07:32:45.273353    2240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:45.280767    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.280945    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.280945    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.281064    2240 retry.go:31] will retry after 250.377545ms: missing components: kube-dns
	I1210 07:32:45.539061    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.539616    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.539616    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.539718    2240 retry.go:31] will retry after 289.337772ms: missing components: kube-dns
	I1210 07:32:45.840329    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.840329    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.840329    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.840528    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.840528    2240 retry.go:31] will retry after 309.196772ms: missing components: kube-dns
	I1210 07:32:46.157293    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.157293    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.157293    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.157293    2240 retry.go:31] will retry after 407.04525ms: missing components: kube-dns
	I1210 07:32:46.592154    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.592265    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.592265    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.592318    2240 retry.go:31] will retry after 495.94184ms: missing components: kube-dns
	I1210 07:32:47.094557    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.094557    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.094557    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.095074    2240 retry.go:31] will retry after 778.892273ms: missing components: kube-dns
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:47.881744    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.881744    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.881744    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.882297    2240 retry.go:31] will retry after 913.098856ms: missing components: kube-dns
	I1210 07:32:48.802046    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:48.802046    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:48.802046    2240 system_pods.go:126] duration metric: took 3.5286376s to wait for k8s-apps to be running ...
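Each retry.go:31 line above is one turn of a wait loop: list the kube-system pods, diff against the expected component set, and sleep a growing interval while anything (here kube-dns) is still missing. A generic sketch of that shape (delays and growth factor are illustrative, not minikube's exact schedule):

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryUntil re-runs check until it reports nothing missing or the
    // timeout expires, sleeping a growing delay between attempts.
    func retryUntil(timeout time.Duration, check func() (missing []string)) error {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for {
    		missing := check()
    		if len(missing) == 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out; missing components: %v", missing)
    		}
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow roughly like the intervals in the log
    	}
    }

    func main() {
    	// Toy check: pretend kube-dns shows up on the third attempt.
    	attempts := 0
    	err := retryUntil(time.Minute, func() []string {
    		attempts++
    		if attempts < 3 {
    			return []string{"kube-dns"}
    		}
    		return nil
    	})
    	fmt.Println(err, "after", attempts, "attempts")
    }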
	I1210 07:32:48.802046    2240 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:48.807470    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:48.825598    2240 system_svc.go:56] duration metric: took 23.5517ms WaitForService to wait for kubelet
	I1210 07:32:48.825598    2240 kubeadm.go:587] duration metric: took 14.7364354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:48.825689    2240 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:48.831503    2240 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:32:48.831503    2240 node_conditions.go:123] node cpu capacity is 16
	I1210 07:32:48.831503    2240 node_conditions.go:105] duration metric: took 5.8138ms to run NodePressure ...
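The NodePressure step above reads the reported capacities (ephemeral storage and CPU here) straight off the node object's status. A minimal client-go sketch of that read, reusing the same assumed kubeconfig path and node name as the earlier sketches:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "custom-flannel-648600", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Println("cpu capacity:", cpu.String(), "ephemeral storage:", eph.String())
    }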
	I1210 07:32:48.831503    2240 start.go:242] waiting for startup goroutines ...
	I1210 07:32:48.831503    2240 start.go:247] waiting for cluster config update ...
	I1210 07:32:48.831503    2240 start.go:256] writing updated cluster config ...
	I1210 07:32:48.837195    2240 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:48.844148    2240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:48.853005    2240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.864384    2240 pod_ready.go:94] pod "coredns-66bc5c9577-dhgpj" is "Ready"
	I1210 07:32:48.864472    2240 pod_ready.go:86] duration metric: took 11.4282ms for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.867887    2240 pod_ready.go:83] waiting for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.876367    2240 pod_ready.go:94] pod "etcd-custom-flannel-648600" is "Ready"
	I1210 07:32:48.876367    2240 pod_ready.go:86] duration metric: took 8.4794ms for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.880884    2240 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.888453    2240 pod_ready.go:94] pod "kube-apiserver-custom-flannel-648600" is "Ready"
	I1210 07:32:48.888453    2240 pod_ready.go:86] duration metric: took 7.5694ms for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.891939    2240 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.254863    2240 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-648600" is "Ready"
	I1210 07:32:49.255015    2240 pod_ready.go:86] duration metric: took 363.0699ms for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.454047    2240 pod_ready.go:83] waiting for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.854254    2240 pod_ready.go:94] pod "kube-proxy-vrrgr" is "Ready"
	I1210 07:32:49.854329    2240 pod_ready.go:86] duration metric: took 400.2758ms for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.054101    2240 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:94] pod "kube-scheduler-custom-flannel-648600" is "Ready"
	I1210 07:32:50.453713    2240 pod_ready.go:86] duration metric: took 399.6056ms for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:40] duration metric: took 1.6095401s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:50.552047    2240 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:32:50.555856    2240 out.go:179] * Done! kubectl is now configured to use "custom-flannel-648600" cluster and "default" namespace by default
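
Note: the pod_ready checks above poll each labeled kube-system pod until it reports Ready (or is gone). A rough manual equivalent, assuming the kubectl context that minikube just configured (context name matches the profile name custom-flannel-648600 shown in the log), would be:

	kubectl --context custom-flannel-648600 -n kube-system get pods
	kubectl --context custom-flannel-648600 -n kube-system wait --for=condition=Ready pods --all --timeout=4m0s

This is only a sketch of the same readiness condition; the test loop also accepts a pod that has been deleted, which kubectl wait does not model.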
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
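
Note: interleaved with the loop above, a second test process (pid 6044) is polling node no-preload-099700 for the Ready condition through the forwarded endpoint 127.0.0.1:57440 and getting EOF, which suggests the TCP connection closes before a response because the apiserver behind the port-forward is not up. A comparable manual check, assuming minikube's generated kubeconfig context for that profile, would be:

	kubectl --context no-preload-099700 get node no-preload-099700 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints True once the node is Ready, and fails the same way while the apiserver is unreachable.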
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
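	What this block shows: every `kubectl describe nodes` probe dies with `connection refused` on `localhost:8443`, i.e. nothing is listening where the kube-apiserver should be inside the node, so the gatherer keeps cycling. A minimal sketch of that kind of health probe (hypothetical, not minikube's own code; the URL is the one the failing kubectl calls target, and the fixed 3s backoff and 2-minute deadline are illustrative choices):

	// Hypothetical apiserver health probe: retry the endpoint from the
	// log until it answers or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://localhost:8443/api?timeout=32s" // endpoint seen in the errors above
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed cluster cert; skip
			// verification for this diagnostic probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver is answering:", resp.Status)
				return
			}
			fmt.Println("apiserver not ready (will retry):", err)
			time.Sleep(3 * time.Second) // simple fixed backoff
		}
		fmt.Println("gave up: apiserver never came up")
	}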
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
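	Each polling cycle above scans for the control-plane containers with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`; an empty ID list is what produces each `No container was found matching ...` warning. A sketch of the same scan, assuming only that the docker CLI is on PATH:

	// Sketch of the per-component container scan the log repeats; each
	// empty result corresponds to one warning line above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("%s: docker ps failed: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
			} else {
				fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
			}
		}
	}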
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
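	The container-status command just above uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so it works whether or not crictl is installed. The same pattern in Go, as a sketch (not minikube's implementation):

	// Prefer crictl if present, otherwise fall back to docker, mirroring
	// the `||` chain in the shell command above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
				return out, nil
			}
		}
		// crictl missing or failed: fall back to docker.
		return exec.Command("docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
		fmt.Print(string(out))
	}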
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
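	These interleaved W-lines come from a second minikube process (PID 6044) polling the `no-preload-099700` node's Ready condition through the forwarded apiserver port 57440 and getting EOF, because no apiserver is up behind that port either. A hypothetical re-creation of that probe via a raw GET against the exact URL from the log; the JSON shape mirrors the standard v1 Node status, and auth is omitted, which a probe against a secured apiserver would need:

	// Hypothetical node-Ready probe: fetch the node object and scan its
	// status conditions for type "Ready".
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		const url = "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700" // from the log
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("error getting node (will retry):", err) // matches the EOFs above
			return
		}
		defer resp.Body.Close()
		var n nodeStatus
		if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Println("node Ready condition:", c.Status)
			}
		}
	}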
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
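
The block above is one pass of minikube's per-component container check: a docker ps name filter for each expected control-plane container, every one of which comes back empty here. A runnable sketch of the same loop (the component names mirror the probes above):

    // Run `docker ps -a` with a k8s_<component> name filter per control-plane
    // piece and report when no container matches, as logs.go does above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }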
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
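
The describe-nodes probe invokes the Kubernetes-version-pinned kubectl binary inside the guest against the in-VM kubeconfig. A sketch reproducing that exact invocation (paths copied verbatim from the log; against this broken cluster it exits 1 with the connection-refused stderr shown above):

    // Run the guest's pinned kubectl with the guest kubeconfig, the same command
    // logs.go wraps in `/bin/bash -c` above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("describe nodes failed:", err) // here: Process exited with status 1
        }
    }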
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 
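
This is where the run actually fails: the node-Ready wait for "no-preload-099700" exhausts its 6m0s budget and minikube exits with GUEST_START. A hedged sketch of such a wait loop; it is an unauthenticated probe against the URL from the log, whereas the real wait uses an authenticated cluster client and structured condition parsing, and the 10-second retry interval is an assumption:

    // Poll the node object until it reports Ready or the 6-minute deadline
    // passes, echoing the "context deadline exceeded" outcome above.
    package main

    import (
        "context"
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: probe without cluster CA
        }}
        url := "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700" // from the log
        for {
            req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
            resp, err := client.Do(req)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // Substring check stands in for parsing status.conditions; illustration only.
                if strings.Contains(string(body), `"type":"Ready","status":"True"`) {
                    fmt.Println("node is Ready")
                    return
                }
            }
            select {
            case <-ctx.Done():
                fmt.Println("gave up waiting for Ready:", ctx.Err()) // context deadline exceeded
                return
            case <-time.After(10 * time.Second): // retry interval is an assumption
            }
        }
    }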
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
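	The component checks above are plain docker name filters: minikube lists containers named k8s_<component> and records the matching IDs; every check here returns "0 containers: []", confirming that none of the control-plane containers exist. A hedged sketch of that scan (illustrative only, not minikube's implementation; the component list is copied from the log lines above):

	// scan.go — approximate the per-component container scan seen in this log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Println(c, "check failed:", err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}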
	I1210 07:34:15.052930    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:15.080623    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:15.117403    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.117403    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:15.120370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:15.147363    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.148371    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:15.151363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:15.180365    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.180365    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:15.183366    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:15.215366    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.215366    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:15.218364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:15.247369    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.247369    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:15.251365    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:15.283373    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.283373    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:15.286369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:15.314370    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.314370    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:15.317368    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:15.347380    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.347380    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:15.347380    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:15.347380    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:15.421369    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:15.421369    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:15.458368    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:15.458368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:15.566221    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:15.566279    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:15.566338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:15.605803    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:15.605803    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:18.163754    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:18.197669    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:18.254543    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.254543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:18.260541    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:18.293062    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.293062    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:18.296833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:18.327885    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.327968    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:18.331280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:18.368942    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.368942    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:18.372299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:18.400463    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.400463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:18.405006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:18.446334    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.446379    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:18.449958    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:18.478295    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.478381    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:18.482123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:18.510432    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.510506    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:18.510548    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:18.510548    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:18.572862    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:18.572862    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:18.614127    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:18.614127    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:18.702730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:18.702730    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:18.702730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:18.729639    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:18.729639    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.289931    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:21.315099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:21.349129    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.349129    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:21.352917    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:21.385897    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.386013    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:21.389207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:21.439847    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.439847    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:21.444868    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:21.473011    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.473011    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:21.476938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:21.503941    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.503983    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:21.507954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:21.536377    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.536377    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:21.540123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:21.571714    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.571714    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:21.575681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:21.605581    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.605581    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:21.605581    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:21.605581    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:21.633565    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:21.633565    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.687271    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:21.687271    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:21.750102    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:21.750102    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:21.792165    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:21.792165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:21.885403    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:24.393597    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:24.420363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:24.450891    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.450891    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:24.454037    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:24.483407    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.483407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:24.489862    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:24.517830    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.517830    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:24.521711    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:24.549403    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.549403    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:24.553551    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:24.580367    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.580367    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:24.584748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:24.612646    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.612646    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:24.616710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:24.647684    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.647753    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:24.651184    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:24.679053    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.679053    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:24.679053    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:24.679053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:24.768115    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:24.768115    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:24.768115    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:24.795167    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:24.795201    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:24.844459    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:24.844459    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:24.907171    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:24.907171    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.453205    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:27.478026    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:27.513249    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.513249    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:27.517125    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:27.547733    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.547733    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:27.551680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:27.577736    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.577736    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:27.581469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:27.612483    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.612483    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:27.616434    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:27.644895    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.644895    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:27.650606    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:27.678273    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.678273    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:27.681744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:27.708604    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.708604    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:27.712244    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:27.742726    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.742726    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:27.742726    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:27.742726    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:27.807570    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:27.807570    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.846722    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:27.846722    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:27.929641    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:27.929641    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:27.929641    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:27.956087    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:27.956087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.506646    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:30.530148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:30.563444    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.563444    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:30.567219    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:30.596843    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.596843    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:30.600803    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:30.628947    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.628947    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:30.632665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:30.663325    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.663369    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:30.667341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:30.695640    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.695640    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:30.699545    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:30.728310    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.728310    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:30.731899    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:30.758598    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.758598    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:30.763285    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:30.792051    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.792051    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:30.792051    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:30.792051    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:30.830219    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:30.830219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:30.919635    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:30.919635    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:30.919635    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:30.949360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:30.949360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.997435    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:30.997435    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.565782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:33.590543    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:33.623936    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.623936    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:33.629607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:33.664589    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.664673    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:33.668215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:33.698892    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.698892    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:33.702344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:33.733428    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.733428    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:33.737226    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:33.764873    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.764873    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:33.768422    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:33.800350    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.800350    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:33.804811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:33.836711    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.836711    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:33.840164    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:33.869248    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.869333    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:33.869333    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:33.869333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.932626    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:33.933627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:33.974227    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:33.974227    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:34.066031    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:34.066031    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:34.066031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:34.092765    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:34.092765    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:36.652871    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:36.677531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:36.712608    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.712608    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:36.718832    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:36.748298    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.748298    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:36.751762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:36.783390    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.783403    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:36.787051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:36.815730    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.815766    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:36.819100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:36.848875    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.848875    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:36.852925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:36.886657    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.886657    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:36.890808    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:36.920858    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.920858    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:36.924583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:36.955882    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.955960    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:36.956001    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:36.956001    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:37.021848    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:37.021848    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:37.060744    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:37.060744    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:37.154895    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:37.154895    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:37.154895    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:37.182385    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:37.182385    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:39.737032    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:39.762115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:39.792900    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.792900    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:39.797014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:39.825423    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.825455    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:39.829352    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:39.856679    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.856679    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:39.860615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:39.891351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.891351    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:39.895346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:39.924351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.924351    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:39.928531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:39.956447    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.956447    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:39.961810    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:39.987792    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.987792    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:39.991127    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:40.018614    1436 logs.go:282] 0 containers: []
	W1210 07:34:40.018614    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:40.018614    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:40.018614    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:40.082378    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:40.082378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:40.123506    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:40.123506    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:40.208266    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:40.209272    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:40.209272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:40.239017    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:40.239017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
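The block above is one pass of minikube's wait-for-apiserver loop: a pgrep probe for a running kube-apiserver process, one `docker ps` probe per expected control-plane container, then five log gathers (kubelet, dmesg, describe nodes, Docker, container status). The timestamps show the same pass repeating roughly every three seconds for the rest of this section. A rough reconstruction of that loop, using only commands that appear in the log (the loop structure and the sleep are inferred from the timestamps, not taken from minikube's source):

    # Sketch of the wait loop visible in the log above (inferred shape):
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        # probe each expected control-plane container by name
        for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                 kube-controller-manager kindnet kubernetes-dashboard; do
            docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
        done
        sleep 3   # next sweep ~3 s later, matching the log timestamps
    done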
	I1210 07:34:42.793527    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:42.818084    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:42.852095    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.852095    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:42.855685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:42.883269    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.883269    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:42.887287    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:42.918719    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.918800    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:42.923828    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:42.950663    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.950663    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:42.956319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:42.985991    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.985991    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:42.989729    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:43.017767    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.017824    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:43.021689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:43.048180    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.048180    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:43.052257    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:43.081092    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.081160    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:43.081183    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:43.081217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:43.174944    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:43.174992    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:43.174992    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:43.202288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:43.202807    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:43.249217    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:43.249217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:43.311267    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:43.311267    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
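Every "describe nodes" gather above fails the same way: kubectl on the node cannot open a TCP connection to https://localhost:8443, the API server's secure port in minikube. That is consistent with the probes finding no k8s_kube-apiserver container — nothing is listening on 8443. Assuming SSH access to the affected node (and that curl is present in the node image), the state can be confirmed by hand; `<profile>` below is a placeholder for the test's cluster name, which this excerpt does not show:

    # A live apiserver answers /healthz; a dead one refuses the TCP
    # connection exactly as in the log above.
    minikube ssh -p <profile> -- curl -sk --max-time 5 https://localhost:8443/healthz \
        || echo "apiserver not reachable"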
	I1210 07:34:45.857003    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:45.881743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:45.911856    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.911856    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:45.915335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:45.945613    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.945613    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:45.949134    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:45.977768    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.977768    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:45.982182    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:46.010859    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.010859    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:46.014603    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:46.043489    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.043531    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:46.047198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:46.080651    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.080685    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:46.084319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:46.116705    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.116780    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:46.121508    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:46.154299    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.154299    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:46.154299    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:46.154299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:46.222546    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:46.222546    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:46.262468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:46.262468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:46.349894    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:46.349894    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:46.349894    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:46.376804    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:46.376804    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
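The "container status" gather is a single shell line with a built-in fallback: the backtick substitution expands to crictl's path when crictl is installed, and to the bare word crictl otherwise, so the command either lists containers through the CRI or fails and falls through to the Docker CLI. The same logic spelled out (an expanded sketch, not minikube's code):

    # sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
        sudo "$(command -v crictl)" ps -a   # list all containers via the CRI
    else
        sudo docker ps -a                   # fall back to the Docker CLI
    fi
    # (the original one-liner also falls back to docker when crictl
    #  itself exits non-zero, not only when it is missing)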
	I1210 07:34:48.931982    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:48.957769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:48.990182    1436 logs.go:282] 0 containers: []
	W1210 07:34:48.990182    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:48.994255    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:49.021913    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.021913    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:49.026344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:49.054704    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.054704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:49.058471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:49.089507    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.089559    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:49.093804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:49.121462    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.121462    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:49.125755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:49.156174    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.156174    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:49.160707    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:49.190933    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.190933    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:49.194771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:49.220610    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.220610    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:49.220610    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:49.220610    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:49.283897    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:49.283897    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:49.324154    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:49.324154    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:49.412165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:49.412165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:49.413146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:49.440045    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:49.440045    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
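The remaining gathers pull fixed-size tails of the node's service and kernel logs. They can be rerun by hand on the node; the commands below are the ones from the log, with only the grouping and comments added:

    sudo journalctl -u kubelet -n 400                  # last 400 journal lines for the kubelet unit
    sudo journalctl -u docker -u cri-docker -n 400     # docker and cri-docker units, interleaved
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # dmesg: -P no pager, -H human-readable timestamps, -L=never no color,
    # --level keeps only warning-and-worse kernel messages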
	I1210 07:34:52.013495    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:52.044149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:52.080205    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.080205    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:52.084762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:52.115105    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.115105    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:52.119720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:52.149672    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.149672    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:52.153985    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:52.186711    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.186711    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:52.192181    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:52.217751    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.217751    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:52.221590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:52.250827    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.250876    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:52.254668    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:52.284643    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.284643    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:52.288811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:52.316628    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.316707    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:52.316707    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:52.316707    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:52.348325    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.348325    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.408110    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.408110    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:52.471268    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:52.471268    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:52.511512    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:52.511512    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.594976    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
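Each sweep above begins with a process-level check that asks whether any kube-apiserver binary is running at all, independent of Docker's view of the containers. Annotated (flags as documented for procps pgrep):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # -f  match the pattern against the full command line
    # -x  require the pattern to match that command line exactly
    # -n  report only the newest matching process
    # exit status 1 with no output => no apiserver process exists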
	I1210 07:34:55.100294    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:55.126530    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:55.160945    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.160945    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:55.164755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:55.196407    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.196407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:55.199994    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:55.229174    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.229174    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:55.232898    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:55.265856    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.265856    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:55.268892    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:55.302098    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.302121    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:55.305590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:55.335754    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.335754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:55.339583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:55.368170    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.368251    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:55.372008    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:55.397576    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.397576    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:55.397576    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:55.397576    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:55.434345    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:55.434345    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:55.528958    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:55.528958    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:55.528958    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:55.555805    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:55.555805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:55.602232    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:55.602232    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
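The per-component probes rely on the container naming convention that kubelet uses through cri-dockerd, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>. `--filter name=` matches names containing the given string and `--format '{{.ID}}'` prints bare IDs, and because of `-a` even an exited container would still be listed — so "0 containers" means the component was never created on this node, not merely that it crashed. For one component:

    docker ps -a --filter name=k8s_etcd --format '{{.ID}}'
    # empty output: etcd was never started here, not just stopped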
	I1210 07:34:58.169858    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:58.195497    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:58.226557    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.226588    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:58.229677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:58.260817    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.260817    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:58.265378    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:58.293848    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.293920    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:58.297406    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:58.326737    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.326737    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:58.330307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:58.357319    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.357407    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:58.360727    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:58.392361    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.392405    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:58.395697    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:58.425728    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.425807    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:58.429369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:58.457816    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.457866    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:58.457866    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:58.457866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:58.495777    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:58.495777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:58.585489    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:58.585489    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:58.585489    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:58.613007    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:58.613007    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:58.661382    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:58.661382    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
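Within each failed "describe nodes" block the same memcache.go:265 error appears five times, apparently because kubectl's client-go retries the initial API discovery request (GET /api) several times before giving up, each retry hitting the same refused connection; the stderr then shows up twice because minikube records it once inside the command error and once in its captured "** stderr **" output. A single discovery attempt can be replayed against the same dead endpoint with the same binary and kubeconfig from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /api
    # expected while nothing listens on 8443:
    #   ... dial tcp [::1]:8443: connect: connection refused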
	I1210 07:35:01.230900    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:01.255356    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:01.292137    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.292190    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:01.297192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:01.328372    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.328372    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:01.332239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:01.360635    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.360635    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:01.364529    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:01.391175    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.391175    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:01.394754    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:01.423093    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.423093    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:01.427022    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:01.454965    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.454965    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:01.459137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:01.487734    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.487734    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:01.492051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:01.518150    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.518150    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:01.518150    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:01.518150    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.580940    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:01.580940    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:01.620363    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:01.620363    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:01.710696    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:01.710696    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:01.710696    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:01.736867    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:01.736867    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.295439    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:04.322348    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:04.356895    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.356919    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:04.361858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:04.396943    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.397019    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:04.401065    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:04.431929    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.431929    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:04.436798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:04.468073    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.468073    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:04.472528    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:04.503230    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.503230    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:04.506632    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:04.540016    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.540016    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:04.543627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:04.576446    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.576446    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:04.583292    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:04.611475    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.611542    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:04.611542    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:04.611542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:04.640376    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:04.640433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.695309    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:04.695309    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:04.756418    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:04.756418    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:04.795089    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:04.795089    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:04.891481    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:04.878108   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.880090   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.883096   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.885167   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.886541   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:04.878108   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.880090   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.883096   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.885167   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.886541   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:07.396688    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:07.422837    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:07.454807    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.454807    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:07.459071    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:07.489720    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.489720    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:07.493466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:07.519982    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.519982    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:07.523858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:07.552985    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.552985    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:07.556972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:07.589709    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.589709    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:07.593709    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:07.621519    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.621519    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:07.625151    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:07.654324    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.654404    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:07.657279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:07.690913    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.690966    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:07.690988    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:07.690988    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:07.757157    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:07.757157    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:07.796333    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:07.796333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:07.893954    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:07.881331   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.882766   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.885657   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887077   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887623   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:07.881331   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.882766   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.885657   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887077   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887623   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:07.893954    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:07.893954    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:07.943452    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:07.943452    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.496562    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:10.522517    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:10.555517    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.555517    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:10.560160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:10.591257    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.591306    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:10.594925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:10.623075    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.623075    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:10.626725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:10.654115    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.654115    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:10.658014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:10.689683    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.689683    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:10.693386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:10.721754    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.721754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:10.725087    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:10.753052    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.753052    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:10.756926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:10.787466    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.787466    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
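
The eight `docker ps -a --filter=name=k8s_<component>` probes rely on the kubelet/cri-dockerd container naming convention, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so zero matches across every component means the control-plane containers were never created at all, not merely unhealthy. A sketch of the same sweep follows; the component list is copied from the log, everything else is an assumption.

// check_components.go - illustrative sketch of the per-component check
// seen in the log (docker ps -a --filter=name=k8s_<name>); the output
// handling here is assumed, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c) // mirrors logs.go:284
		} else {
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}
}
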
	I1210 07:35:10.787466    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:10.787466    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:10.882563    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:10.873740   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.874902   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.876114   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.877091   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.878349   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:10.882563    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:10.882563    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:10.944299    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:10.944299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.993835    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:10.993835    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:11.053114    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:11.053114    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
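
Besides the per-component probes, each cycle gathers node-level logs from journalctl (the docker/cri-docker and kubelet units, last 400 lines each) and from dmesg filtered to warn level and above. A consolidated sketch, with unit names and line counts taken from the log and the error handling assumed:

// gather_logs.go - sketch of the node-level log gathering seen above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string][]string{
		"Docker":  {"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
		"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
	}
	for name, args := range sources {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n", name, len(out))
	}
	// dmesg needs a pipeline (| tail -n 400), so run it through a shell
	// exactly as the log does.
	dmesg := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	if out, err := dmesg.CombinedOutput(); err == nil {
		fmt.Printf("== dmesg (%d bytes) ==\n", len(out))
	}
}
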
	I1210 07:35:13.597304    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:13.621417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:13.653723    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.653842    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:13.657020    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:13.690175    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.690175    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:13.693954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:13.723350    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.723350    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:13.728514    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:13.757179    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.757179    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:13.765645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:13.794387    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.794473    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:13.798130    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:13.826937    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.826937    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:13.830895    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:13.865171    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.865171    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:13.869540    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:13.899920    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.899920    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:13.899920    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:13.899920    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:13.964338    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:13.964338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:14.028584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:14.028584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:14.067840    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:14.067840    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:14.154123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:14.144490   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.145615   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.146725   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.148037   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.149069   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:14.154123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:14.154123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:16.685726    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:16.716822    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:16.753764    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.753827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:16.757211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:16.789634    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.789634    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:16.793640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:16.822677    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.822728    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:16.826522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:16.853660    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.853660    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:16.858461    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:16.887452    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.887504    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:16.893014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:16.939344    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.939344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:16.943118    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:16.971703    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.971781    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:16.974884    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:17.003517    1436 logs.go:282] 0 containers: []
	W1210 07:35:17.003595    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:17.003595    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:17.003595    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:17.088355    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:17.079526   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.080729   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.081812   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.083165   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.084419   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:17.088355    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:17.088355    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:17.117181    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:17.117241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:17.168070    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:17.168155    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:17.231584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:17.231584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:19.776112    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:19.801640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:19.835886    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.835886    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:19.839626    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:19.872127    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.872127    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:19.876526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:19.929339    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.929339    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:19.933522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:19.962400    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.962400    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:19.966133    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:19.994468    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.994544    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:19.998645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:20.027252    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.027252    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:20.032575    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:20.060153    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.060153    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:20.065171    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:20.091891    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.091891    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:20.091891    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:20.091891    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:20.131103    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:20.131103    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:20.218614    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:20.208033   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.209212   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.210215   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214139   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214965   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:20.218614    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:20.219146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:20.245788    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:20.245788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:20.298111    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:20.298207    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:22.861878    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:22.887649    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:22.922573    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.922573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:22.926179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:22.959170    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.959197    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:22.963338    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:22.994510    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.994566    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:22.997861    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:23.029960    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.030036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:23.033513    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:23.064625    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.064625    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:23.069769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:23.101906    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.101943    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:23.105651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:23.136615    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.136615    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:23.140616    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:23.170857    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.170942    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:23.170942    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:23.170942    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:23.233098    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:23.233098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:23.273238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:23.273238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:23.361638    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:23.352696   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.354050   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.356707   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.357782   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.358807   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:23.361638    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:23.361638    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:23.390711    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:23.391230    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:25.949809    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:25.975470    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:26.007496    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.007496    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:26.011469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:26.044617    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.044617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:26.048311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:26.078756    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.078783    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:26.082359    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:26.112113    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.112183    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:26.115713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:26.148097    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.148097    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:26.151926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:26.182729    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.182753    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:26.186743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:26.217219    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.217219    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:26.223773    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:26.251643    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.251713    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:26.251713    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:26.251713    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:26.278698    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:26.278698    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:26.332014    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:26.332014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:26.394304    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:26.394304    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:26.433073    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:26.433073    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:26.519395    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:26.506069   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.507354   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.509591   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.512516   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.514125   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:29.024398    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:29.049372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:29.084989    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.085019    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:29.089078    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:29.116420    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.116420    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:29.120531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:29.149880    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.149880    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:29.153505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:29.181726    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.181790    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:29.185295    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:29.216713    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.216713    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:29.222568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:29.249487    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.249487    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:29.253512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:29.283473    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.283497    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:29.287061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:29.313225    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.313225    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:29.313225    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:29.313225    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:29.399665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:29.399665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:29.399665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:29.428593    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:29.428593    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:29.477815    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:29.477877    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:29.541874    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:29.541874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:32.087876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:32.113456    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:32.145773    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.145805    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:32.149787    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:32.178912    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.178987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:32.182700    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:32.213301    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.213301    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:32.217129    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:32.246756    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.246824    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:32.250299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:32.278791    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.278835    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:32.282397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:32.316208    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.316278    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:32.320233    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:32.349155    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.349155    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:32.352807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:32.386875    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.386875    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:32.386944    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:32.386944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:32.479781    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:32.479781    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:32.479781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:32.506994    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:32.506994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:32.561757    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:32.561757    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:32.624545    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:32.624545    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.176040    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:35.201056    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:35.235735    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.235735    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:35.239655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:35.267349    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.267416    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:35.270515    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:35.303264    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.303264    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:35.306371    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:35.339037    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.339263    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:35.343297    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:35.375639    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.375639    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:35.379647    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:35.407670    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.407670    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:35.411506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:35.446240    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.446240    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:35.450265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:35.477814    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.477814    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:35.477814    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:35.477814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:35.541174    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:35.541174    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.581633    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:35.581633    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:35.673254    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:35.673254    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:35.673254    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:35.701200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:35.701200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:38.255869    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:38.281759    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:38.316123    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.316123    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:38.319358    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:38.348903    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.348943    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:38.352900    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:38.381759    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.381795    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:38.385361    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:38.414524    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.414586    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:38.417710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:38.447131    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.447205    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:38.451100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:38.479508    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.479543    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:38.483003    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:38.512848    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.512848    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:38.516967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:38.547680    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.547680    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:38.547680    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:38.547680    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:38.614038    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:38.614038    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:38.658448    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:38.658448    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:38.743054    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:38.743054    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:38.743054    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:38.775152    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:38.775214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:41.333835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:41.358081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:41.393471    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.393471    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:41.396774    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:41.425173    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.425224    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:41.428523    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:41.456663    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.456663    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:41.459654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:41.490212    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.490212    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:41.493250    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:41.523505    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.523505    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:41.527006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:41.555529    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.555529    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:41.559605    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:41.590913    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.591011    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:41.596392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:41.627361    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.627421    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:41.627441    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:41.627538    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:41.692948    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:41.692948    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:41.731909    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:41.731909    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:41.816121    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:41.816121    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:41.816121    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:41.844622    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:41.844622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:44.401865    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:44.426294    1436 out.go:203] 
	W1210 07:35:44.428631    1436 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:35:44.428631    1436 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:35:44.428631    1436 out.go:285] * Related issues:
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:35:44.430629    1436 out.go:203] 
	
	
	==> Docker <==
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216617054Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216699662Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216710563Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216717064Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216722865Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216746967Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216779770Z" level=info msg="Initializing buildkit"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.379150718Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395276092Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395426306Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395462310Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395512215Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:29:38 newest-cni-525200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:29:39 newest-cni-525200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:29:39 newest-cni-525200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:56.751510   20065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:56.752615   20065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:56.753792   20065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:56.754617   20065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:56.756894   20065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347496] CPU: 6 PID: 490841 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe73ddc4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe73ddc4af6.
	[  +0.000000] RSP: 002b:00007ffc57a05a90 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.867258] CPU: 5 PID: 491006 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1a7acb4b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1a7acb4af6.
	[  +0.000001] RSP: 002b:00007ffe19029200 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:32] tmpfs: Unknown parameter 'noswap'
	[ +15.541609] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:35:56 up  3:04,  0 user,  load average: 2.04, 3.52, 4.31
	Linux newest-cni-525200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:35:53 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:54 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 10 07:35:54 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:54 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:54 newest-cni-525200 kubelet[19871]: E1210 07:35:54.405282   19871 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:54 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:54 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:55 newest-cni-525200 kubelet[19899]: E1210 07:35:55.158951   19899 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:55 newest-cni-525200 kubelet[19928]: E1210 07:35:55.896256   19928 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:55 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:56 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 10 07:35:56 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:56 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:56 newest-cni-525200 kubelet[20027]: E1210 07:35:56.647973   20027 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:56 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:56 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
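The kubelet crash loop at the end of this dump explains the K8S_APISERVER_MISSING exit above: the v1.35.0-rc.1 kubelet refuses to validate its configuration on a cgroup v1 host, and this WSL2 kernel still presents the v1 hierarchy (the dockerd warning about cgroup v1 deprecation in the Docker section points the same way). A minimal check, assuming the newest-cni-525200 container is still up:

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy the kubelet rejects
	docker exec newest-cni-525200 stat -fc %T /sys/fs/cgroup
	# confirm the kubelet's own validation error from its unit log
	docker exec newest-cni-525200 journalctl -u kubelet -n 5 --no-pager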
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
E1210 07:35:58.393562   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (603.2291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-525200" apiserver is not running, skipping kubectl commands (state="Stopped")
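Exit status 2 from "minikube status" is how the harness distinguishes a stopped component from a command failure, hence "may be ok". For manual triage, the same Go-template flag the test uses can pull several fields in one call (a sketch; {{.Host}} and {{.APIServer}} are exercised by this test, {{.Kubelet}} is assumed from the same status output):

	out/minikube-windows-amd64.exe status -p newest-cni-525200 --format "host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}"

Given the state above this would be expected to print host:Running kubelet:Stopped apiserver:Stopped.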
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-525200
helpers_test.go:244: (dbg) docker inspect newest-cni-525200:

-- stdout --
	[
	    {
	        "Id": "6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188",
	        "Created": "2025-12-10T07:18:58.277037255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:29:29.73179662Z",
	            "FinishedAt": "2025-12-10T07:29:26.920141661Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hostname",
	        "HostsPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/hosts",
	        "LogPath": "/var/lib/docker/containers/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188/6b7f9063cbda78e0ef38572c57c7867feb8e3be58d41957c78bf670ec281c188-json.log",
	        "Name": "/newest-cni-525200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-525200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-525200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e31d32294dc0b8b9ba09ebc0adce95d5ecde79d96faff02ddfcca2df2a49118/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-525200",
	                "Source": "/var/lib/docker/volumes/newest-cni-525200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-525200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-525200",
	                "name.minikube.sigs.k8s.io": "newest-cni-525200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6405ded628bc55282f5002d4bd683ef72ad68a142c14324a7fe852f16eb1d8f",
	            "SandboxKey": "/var/run/docker/netns/c6405ded628b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57762"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57764"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-525200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.121.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:79:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e73cdc5fd1be9396722947f498060ee7b5757251a78043b99e30abfea0ec658b",
	                    "EndpointID": "bf76bc1596f8833f7b9c83f8bb2261128b3871775b4118fe4c99fcdac5e453d3",
	                    "Gateway": "192.168.121.1",
	                    "IPAddress": "192.168.121.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-525200",
	                        "6b7f9063cbda"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
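Most of this inspect output is routine; the fields that matter for the failure are State.Status ("running") and the 8443/tcp host binding (127.0.0.1:57764) that the status probe dials. Both can be pulled without scanning the full JSON, for example:

	# container state only
	docker inspect -f "{{.State.Status}}" newest-cni-525200
	# host-side binding for the apiserver port
	docker port newest-cni-525200 8443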
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200
E1210 07:35:59.093779   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (615.1534ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
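So the node container is Running while the API server is Stopped, which matches the systemd restart loop in the kubelet section above. Because the kicbase container boots systemd as PID 1 (Entrypoint /usr/local/bin/entrypoint /sbin/init in the inspect output), the loop can also be watched from the host, for instance:

	docker exec newest-cni-525200 systemctl status kubelet --no-pager -l
	docker exec newest-cni-525200 journalctl -u kubelet -f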
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25
E1210 07:35:59.676331   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-525200 logs -n 25: (1.4937962s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-648600 sudo journalctl -xeu kubelet --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/kubernetes/kubelet.conf                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat docker --no-pager                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo docker system info                                       │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cri-dockerd --version                                    │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo containerd config dump                                   │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat crio --no-pager                            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo crio config                                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ delete  │ -p custom-flannel-648600                                                               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ image   │ newest-cni-525200 image list --format=json                                             │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ pause   │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ unpause │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:31:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.483636    2240 out.go:374] Setting ErrFile to fd 1148...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.498633    2240 out.go:368] Setting JSON to false
	I1210 07:31:27.500624    2240 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10819,"bootTime":1765341068,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:31:27.500624    2240 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:31:27.505874    2240 out.go:179] * [custom-flannel-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:31:27.510785    2240 notify.go:221] Checking for updates...
	I1210 07:31:27.513604    2240 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:31:27.516776    2240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:31:27.521423    2240 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:31:27.524646    2240 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:31:27.526628    2240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:27.530138    2240 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:27.530637    2240 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.530927    2240 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.531072    2240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:31:27.674116    2240 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:31:27.679999    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:27.935225    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:27.906881904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:27.940210    2240 out.go:179] * Using the docker driver based on user configuration
	I1210 07:31:27.947210    2240 start.go:309] selected driver: docker
	I1210 07:31:27.947210    2240 start.go:927] validating driver "docker" against <nil>
	I1210 07:31:27.947210    2240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:31:28.038927    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:28.306393    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:28.276193336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:28.307456    2240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:31:28.308474    2240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:31:28.311999    2240 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:31:28.314563    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:31:28.314921    2240 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 07:31:28.314921    2240 start.go:353] cluster config:
	{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:31:28.317704    2240 out.go:179] * Starting "custom-flannel-648600" primary control-plane node in "custom-flannel-648600" cluster
	I1210 07:31:28.318967    2240 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:31:28.320981    2240 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
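	[editor's note] Every memcache.go error in the block above is the same symptom: nothing is listening on the node's localhost:8443 yet, so kubectl cannot fetch the API group list. A self-contained sketch of the equivalent reachability probe (the address and timeout are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port the way the retry loop above effectively
	// does: a plain TCP dial; "connection refused" means no listener yet.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up:", err) // matches the refusals in the log
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```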
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
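	[editor's note] The cycle above (pgrep for kube-apiserver, then one docker ps per component, then journalctl) repeats on every log-gathering pass; "0 containers: []" simply means no container named with the k8s_<component> prefix exists yet. A hedged Go sketch of one probe, reusing the exact docker arguments from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the ssh_runner lines above, run locally for
	// illustration: list IDs of any container, running or exited, whose
	// name carries the k8s_kube-apiserver prefix.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids) // the log prints "0 containers: []"
}
```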
	I1210 07:31:28.323967    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:28.323967    2240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:31:28.370604    2240 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.410253    2240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:31:28.410253    2240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:31:28.586590    2240 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
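	[editor's note] Both 404 warnings above come from the same preload probe: minikube checks the GCS bucket first, then the GitHub releases mirror, and only when neither hosts a tarball for this Kubernetes/runtime pair does it fall back to caching images one by one (the localpath/cache lines that follow). A sketch of that probe, under the assumption that a simple HTTP HEAD is representative of the check:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Both candidate URLs are taken verbatim from the warnings above.
	urls := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4",
	}
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			fmt.Println("probe failed:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(resp.StatusCode, u) // a 404 here triggers the per-image fallback
	}
}
```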
	I1210 07:31:28.586590    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:28.586590    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json: {Name:mk37135597d0b3e0094e1cb1b5ff50d942db06b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:28.587928    2240 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:31:28.587928    2240 start.go:360] acquireMachinesLock for custom-flannel-648600: {Name:mk4a3a34c58cff29c46217d57a91ed79fc9f522b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:28.588459    2240 start.go:364] duration metric: took 531.3µs to acquireMachinesLock for "custom-flannel-648600"
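	[editor's note] The lock structs printed throughout this log ({Delay:500ms Timeout:10m0s ...}) all follow one pattern: retry acquisition every Delay until Timeout expires. A generic sketch of that pattern (acquire is a hypothetical stand-in; minikube's real implementation lives behind lock.go and start.go):

```go
package main

import (
	"fmt"
	"time"
)

// acquire polls try() every delay and gives up after timeout, mirroring
// the {Delay:500ms Timeout:10m0s} fields printed in the log above.
func acquire(name string, delay, timeout time.Duration, try func() bool) error {
	deadline := time.Now().Add(timeout)
	for !try() {
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %q", name)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	free := true // the uncontended case, like the 531.3µs acquisition above
	err := acquire("custom-flannel-648600", 500*time.Millisecond, 10*time.Minute,
		func() bool { return free })
	fmt.Println(err) // <nil>
}
```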
	I1210 07:31:28.588615    2240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:31:28.588742    2240 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:31:28.592548    2240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:31:28.593172    2240 start.go:159] libmachine.API.Create for "custom-flannel-648600" (driver="docker")
	I1210 07:31:28.593172    2240 client.go:173] LocalClient.Create starting
	I1210 07:31:28.593172    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.601656    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:31:28.702719    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:31:28.710721    2240 network_create.go:284] running [docker network inspect custom-flannel-648600] to gather additional debugging logs...
	I1210 07:31:28.710721    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600
	W1210 07:31:28.938963    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 returned with exit code 1
	I1210 07:31:28.938963    2240 network_create.go:287] error running [docker network inspect custom-flannel-648600]: docker network inspect custom-flannel-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-648600 not found
	I1210 07:31:28.938963    2240 network_create.go:289] output of [docker network inspect custom-flannel-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-648600 not found
	
	** /stderr **
	I1210 07:31:28.945949    2240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:31:29.091971    2240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.381586    2240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.465291    2240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a8ae0}
	I1210 07:31:29.465291    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:31:29.470056    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.046347    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.046347    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.046347    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:31:30.140283    2240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.262644    2240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e1d40}
	I1210 07:31:30.262866    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:31:30.267646    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.581811    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.581811    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.581811    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.76.0/24, will retry: subnet is taken
	I1210 07:31:30.621040    2240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.648052    2240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cde450}
	I1210 07:31:30.648052    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:31:30.656045    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	I1210 07:31:30.870907    2240 network_create.go:108] docker network custom-flannel-648600 192.168.85.0/24 created
	I1210 07:31:30.870907    2240 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-648600" container
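	[editor's note] The three attempts above make the pattern visible: candidate subnets step through 192.168.49.0/24, .58, .67, .76, .85 (increments of 9), host-side reservations are skipped up front, and a daemon-side "Pool overlaps" rejection just advances the walk; the node then gets gateway+1 (192.168.85.2 here). A hedged sketch of that walk, assuming the step-by-9 pattern generalizes:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "custom-flannel-648600"
	// Walk the candidate /24s seen in the log (third octet 49, 58, 67, ...)
	// and let the daemon's "Pool overlaps" error move us to the next one.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway, name).Run()
		if err == nil {
			fmt.Printf("created %s; node IP will be 192.168.%d.2\n", subnet, third)
			return
		}
	}
	fmt.Println("no free subnet found")
}
```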
	I1210 07:31:30.881906    2240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:31:31.006456    2240 cli_runner.go:164] Run: docker volume create custom-flannel-648600 --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:31:31.098467    2240 oci.go:103] Successfully created a docker volume custom-flannel-648600
	I1210 07:31:31.104469    2240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2058554s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:31:31.792496    2240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.2053301s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:31:31.794500    2240 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.794500    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:31:31.794500    2240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2078599s
	I1210 07:31:31.795487    2240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:31:31.796493    2240 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.796493    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:31:31.796493    2240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.2098526s
	I1210 07:31:31.796493    2240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:31:31.809204    2240 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.809204    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:31:31.809204    2240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2225634s
	I1210 07:31:31.809728    2240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:31:31.821783    2240 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.822582    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:31:31.822582    2240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.2354164s
	I1210 07:31:31.822582    2240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:31:31.828690    2240 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.828690    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:31:31.828690    2240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.2420491s
	I1210 07:31:31.828690    2240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:31:31.868175    2240 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.869189    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:31:31.869189    2240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.2820228s
	I1210 07:31:31.869189    2240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:31:31.869189    2240 cache.go:87] Successfully saved all images to host disk.
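	[editor's note] All of the "windows sanitize" lines earlier apply one rule: the ':' before an image tag is not a legal NTFS filename character, so the cached tar file replaces it with '_' while keeping the registry directory layout under .minikube\cache\images. A minimal sketch (sanitizeCachePath is illustrative, not minikube's localpath API):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizeCachePath mirrors the transformation logged by localpath.go:
// registry.k8s.io/kube-proxy:v1.34.3 -> registry.k8s.io\kube-proxy_v1.34.3
func sanitizeCachePath(image string) string {
	return filepath.FromSlash(strings.ReplaceAll(image, ":", "_"))
}

func main() {
	fmt.Println(sanitizeCachePath("registry.k8s.io/kube-proxy:v1.34.3"))
}
```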
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:32.772569    2240 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6680738s)
	I1210 07:31:32.772569    2240 oci.go:107] Successfully prepared a docker volume custom-flannel-648600
	I1210 07:31:32.772569    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:32.777565    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:33.023291    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:33.001747684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:33.027286    2240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:31:33.264619    2240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-648600 --name custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-648600 --network custom-flannel-648600 --ip 192.168.85.2 --volume custom-flannel-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:31:34.003194    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Running}}
	I1210 07:31:34.069196    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.137196    2240 cli_runner.go:164] Run: docker exec custom-flannel-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:31:34.255530    2240 oci.go:144] the created container "custom-flannel-648600" has a running status.
	I1210 07:31:34.255530    2240 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:34.371827    2240 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:31:34.454671    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.514682    2240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:31:34.514682    2240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:31:34.665673    2240 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:37.044619    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:37.095607    2240 machine.go:94] provisionDockerMachine start ...
	I1210 07:31:37.098607    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.155601    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.171620    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.171620    2240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:31:37.347331    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.347331    2240 ubuntu.go:182] provisioning hostname "custom-flannel-648600"
	I1210 07:31:37.350327    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.408671    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.409222    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.409222    2240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname
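	[editor's note] Provisioning talks to the container over the SSH port Docker published to 127.0.0.1:58200 (see the inspect of "22/tcp" above) and runs plain shell commands. A sketch using golang.org/x/crypto/ssh for illustration; the key path, user, and port are taken from this log, while the client setup is an assumption, not minikube's libmachine code:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, _ := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa`)
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:58200", cfg)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		fmt.Println("session:", err)
		return
	}
	defer sess.Close()
	// Same command the provisioner runs above.
	out, _ := sess.CombinedOutput(`sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname`)
	fmt.Printf("%s", out)
}
```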
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:37.617301    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.621329    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.680493    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.681514    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.681514    2240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:31:37.850452    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:31:37.850452    2240 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:31:37.850452    2240 ubuntu.go:190] setting up certificates
	I1210 07:31:37.850452    2240 provision.go:84] configureAuth start
	I1210 07:31:37.855263    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:37.926854    2240 provision.go:143] copyHostCerts
	I1210 07:31:37.927569    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:31:37.927608    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:31:37.928059    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:31:37.928961    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:31:37.928961    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:31:37.928961    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:31:37.930358    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:31:37.930390    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:31:37.930744    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:31:37.931754    2240 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-648600 san=[127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube]
	I1210 07:31:38.038131    2240 provision.go:177] copyRemoteCerts
	I1210 07:31:38.042277    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:31:38.045314    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.098793    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:38.243502    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:31:38.284050    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1210 07:31:38.320436    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:31:38.351829    2240 provision.go:87] duration metric: took 501.3694ms to configureAuth
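The configureAuth sequence above copies the host certificates and mints a server certificate signed by the local CA, with the SAN list shown at provision.go:117. A rough openssl equivalent, assuming the file names from the log (an illustrative sketch, not minikube's Go implementation):

# generate a key and CSR for the node, then sign it with the minikube CA
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.custom-flannel-648600"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 365 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:custom-flannel-648600,DNS:localhost,DNS:minikube')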
	I1210 07:31:38.351829    2240 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:31:38.352840    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:38.355824    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.405824    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.405824    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.405824    2240 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:31:38.582107    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:31:38.582107    2240 ubuntu.go:71] root file system type: overlay
	I1210 07:31:38.582107    2240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:31:38.585874    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.646407    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.646407    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.646407    2240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:31:38.847766    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:31:38.852241    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.938899    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.938899    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.938899    2240 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:31:40.711527    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:31:38.832035101 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
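The restart one-liner a few lines up is an update-if-changed idiom: diff -u exits non-zero when the rendered unit differs from the installed one, so the || branch only swaps the file in and bounces Docker when something actually changed (the diff it printed is the evidence). The same logic, expanded for readability (a sketch):

if ! sudo diff -u /lib/systemd/system/docker.service \
               /lib/systemd/system/docker.service.new; then
  # files differ: install the rendered unit and restart the daemon
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload
  sudo systemctl -f enable docker
  sudo systemctl -f restart docker
fi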
	I1210 07:31:40.711665    2240 machine.go:97] duration metric: took 3.616002s to provisionDockerMachine
	I1210 07:31:40.711665    2240 client.go:176] duration metric: took 12.1183047s to LocalClient.Create
	I1210 07:31:40.711665    2240 start.go:167] duration metric: took 12.1183047s to libmachine.API.Create "custom-flannel-648600"
	I1210 07:31:40.711665    2240 start.go:293] postStartSetup for "custom-flannel-648600" (driver="docker")
	I1210 07:31:40.711665    2240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:31:40.715645    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:31:40.718723    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:40.776513    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:40.917451    2240 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:31:40.923444    2240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:31:40.923444    2240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:31:40.923444    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:31:40.929458    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:31:40.942452    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:31:40.977491    2240 start.go:296] duration metric: took 265.8211ms for postStartSetup
	I1210 07:31:40.981481    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.034489    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:41.039496    2240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:31:41.043532    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.111672    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.255080    2240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:31:41.269938    2240 start.go:128] duration metric: took 12.6809984s to createHost
	I1210 07:31:41.269938    2240 start.go:83] releasing machines lock for "custom-flannel-648600", held for 12.6812262s
	I1210 07:31:41.273664    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.324666    2240 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:31:41.329678    2240 ssh_runner.go:195] Run: cat /version.json
	I1210 07:31:41.329678    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.334670    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	W1210 07:31:41.497715    2240 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
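The status-127 failure above comes from invoking the Windows binary name curl.exe inside the Linux node, where only curl exists; 127 is the shell's command-not-found code, and the registry-proxy warning further down follows from it. A portable form of the same reachability probe (sketch):

curl -sS -m 2 https://registry.k8s.io/ >/dev/null \
  && echo 'registry.k8s.io reachable' \
  || echo 'registry.k8s.io unreachable'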
	I1210 07:31:41.501431    2240 ssh_runner.go:195] Run: systemctl --version
	I1210 07:31:41.518880    2240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:31:41.528176    2240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:31:41.531184    2240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:31:41.579185    2240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:31:41.579185    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:41.579185    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:41.579185    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:31:41.596178    2240 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:31:41.596178    2240 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:31:41.606178    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:31:41.626187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:31:41.641198    2240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:31:41.645182    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:31:41.668187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.687179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:31:41.706179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.724180    2240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:31:41.742180    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:31:41.759185    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:31:41.778184    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:31:41.795180    2240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:31:41.811185    2240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:31:41.828187    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:41.983806    2240 ssh_runner.go:195] Run: sudo systemctl restart containerd
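The sed series just above rewrites /etc/containerd/config.toml in place before this restart. Assuming containerd's 1.x CRI plugin layout, the fragment those edits touch should come out roughly like this (a reconstruction for illustration, shown via a heredoc; not a dump from the node):

cat <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  restrict_oom_score_adj = false
  sandbox_image = "registry.k8s.io/pause:3.10.1"
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
EOF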
	I1210 07:31:42.163822    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:42.163822    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:42.167818    2240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:31:42.193819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.216825    2240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:31:42.280833    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.301820    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:31:42.320823    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:31:42.345832    2240 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:31:42.358831    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:31:42.373835    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:31:42.401822    2240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:42.551686    2240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:31:42.712827    2240 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:31:42.712827    2240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:31:42.735824    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:31:42.756828    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:42.906845    2240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:31:43.937123    2240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0302614s)
	I1210 07:31:43.944887    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:31:43.971819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:31:43.996364    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.030377    2240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:31:44.173489    2240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:31:44.332105    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.483148    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:31:44.509404    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:31:44.533765    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.690011    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:31:44.790147    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.810716    2240 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:31:44.813714    2240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:31:44.820719    2240 start.go:564] Will wait 60s for crictl version
	I1210 07:31:44.824717    2240 ssh_runner.go:195] Run: which crictl
	I1210 07:31:44.835701    2240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:31:44.880457    2240 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:31:44.883920    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:44.928460    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:45.060104    2240 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:31:45.062900    2240 cli_runner.go:164] Run: docker exec -t custom-flannel-648600 dig +short host.docker.internal
	I1210 07:31:45.193754    2240 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:31:45.197851    2240 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:31:45.204880    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
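The temp-file dance above exists because sudo cmd > /etc/hosts would open the target as the unprivileged caller; instead the one-liner filters out any stale host.minikube.internal entry, appends the fresh mapping, and copies the result into place as root. The same idiom, spread out (sketch):

HOST_IP=192.168.65.254   # value dug out of host.docker.internal above
{
  grep -v $'\thost.minikube.internal$' /etc/hosts
  printf '%s\thost.minikube.internal\n' "$HOST_IP"
} > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts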
	I1210 07:31:45.225085    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:45.282870    2240 kubeadm.go:884] updating cluster {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:31:45.283875    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:45.286873    2240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:31:45.317078    2240 docker.go:691] Got preloaded images: 
	I1210 07:31:45.317078    2240 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:31:45.317078    2240 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:31:45.330428    2240 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.336331    2240 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.341435    2240 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.341435    2240 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.347452    2240 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.347452    2240 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.352434    2240 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.355426    2240 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.358455    2240 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.361429    2240 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.365434    2240 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.366439    2240 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.369440    2240 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:45.370428    2240 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.374431    2240 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.379430    2240 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:31:45.411422    2240 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.466193    2240 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.518621    2240 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.573883    2240 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.622874    2240 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.672905    2240 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.723034    2240 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.771034    2240 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1210 07:31:45.842424    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.842823    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.869734    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890739    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890951    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.897121    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.901151    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.922366    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:31:45.956325    2240 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:31:45.956325    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:45.956325    2240 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.961320    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.992754    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:31:46.059786    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:31:46.060783    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.065694    2240 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:31:46.065694    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.065694    2240 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:31:46.067530    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.067911    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.068609    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:46.070610    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:31:46.073597    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.074603    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.147805    2240 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:31:46.151807    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.261151    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:46.262119    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:46.272115    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.272115    2240 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:31:46.272115    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.272115    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:31:46.272115    2240 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.272115    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:31:46.277116    2240 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.278121    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.289109    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.293116    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:31:46.476808    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:31:46.481795    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:46.504793    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:31:46.504793    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:31:46.672791    2240 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.672791    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:31:47.172597    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:31:47.208589    2240 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:47.208589    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
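
The container-status probe above uses a shell fallback: it prefers crictl when it is on PATH and falls back to plain docker otherwise. A minimal sketch of the same idiom, runnable on any host with docker installed (crictl is optional):

    #!/bin/bash
    # `which crictl || echo crictl` yields a non-existent command name when
    # crictl is absent, so the first branch fails with "command not found"
    # and the `|| sudo docker ps -a` fallback runs instead.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
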
	I1210 07:31:48.287161    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0785558s)
	I1210 07:31:48.287161    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:31:48.287161    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:48.287161    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:31:51.130300    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.8430943s)
	I1210 07:31:51.130300    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:31:51.130300    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:51.130300    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:31:52.383759    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.2534401s)
	I1210 07:31:52.383759    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:31:52.383759    2240 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:52.383759    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.448511    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:52.475325    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:52.506360    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.506360    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:52.510172    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:52.540147    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.540147    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:52.544437    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:52.575774    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.575774    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:52.579336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:52.610061    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.610061    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:52.613342    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:52.642765    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.642765    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:52.649215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:52.678701    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.678701    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:52.682526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:52.710203    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.710203    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:52.715870    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:52.745326    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.745351    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:52.745351    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:52.745397    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.811401    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:52.811401    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:52.853138    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:52.853138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:52.968335    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:52.968335    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:52.968335    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:52.995279    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:52.995802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:55.245680    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8618761s)
	I1210 07:31:55.245680    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:31:55.246466    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:55.246522    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:31:56.790187    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.5436405s)
	I1210 07:31:56.790187    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:31:56.790187    2240 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:56.790187    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:55.548093    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:55.571449    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:55.603901    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.603970    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:55.607695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:55.639065    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.639065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:55.643536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:55.671930    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.671930    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:55.675998    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:55.704460    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.704460    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:55.708947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:55.739257    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.739257    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:55.742852    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:55.772295    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.772344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:55.776423    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:55.803812    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.803812    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:55.809849    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:55.841586    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.841647    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:55.841647    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:55.841647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:55.916368    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:55.916368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:55.958653    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:55.958653    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:56.055702    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:56.055702    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:56.055702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:56.084883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:56.084883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.290113    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (4.4998566s)
	I1210 07:32:01.290113    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:32:01.290113    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:32:01.290113    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	I1210 07:31:58.642350    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:58.668189    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:58.699633    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.699633    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:58.705036    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:58.738553    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.738553    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:58.742579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:58.772414    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.772414    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:58.775757    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:58.804872    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.804872    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:58.808509    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:58.835398    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.835398    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:58.843124    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:58.871465    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.871465    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:58.875535    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:58.905029    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.905108    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:58.910324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:58.953100    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.953100    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:58.953100    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:58.953100    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:59.012946    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:59.012946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:59.052964    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:59.052964    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:59.146228    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:59.146228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:59.146228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:59.173200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:59.173200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.725170    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:01.746739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:01.779670    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.779670    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:01.783967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:01.812617    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.812617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:01.817482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:01.848083    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.848083    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:01.852344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:01.883648    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.883648    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:01.887655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:01.918403    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.918403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:01.922409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:01.961721    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.961721    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:01.969744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:01.998302    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.998302    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:02.003804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:02.032315    1436 logs.go:282] 0 containers: []
	W1210 07:32:02.032315    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:02.032315    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:02.032315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:02.096900    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:02.096900    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:02.136137    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:02.136137    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:02.227732    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:02.227732    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:02.227732    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:02.255236    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:02.255236    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:03.670542    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3803916s)
	I1210 07:32:03.670542    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:32:03.670542    2240 cache_images.go:125] Successfully loaded all cached images
	I1210 07:32:03.670542    2240 cache_images.go:94] duration metric: took 18.3531776s to LoadCachedImages
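
The LoadCachedImages pass above (about 18.4s total per the duration metric) follows one pattern per image: stat the tarball on the node, scp it from the host-side cache if missing, then pipe it into the Docker daemon. A sketch of that sequence under the paths shown in the log (any saved image tarball works the same way):

    #!/bin/bash
    # Path as in the log lines; minikube keeps transferred tarballs here.
    IMG=/var/lib/minikube/images/pause_3.10.1
    if ! stat -c "%s %y" "$IMG"; then
        # This is the point where minikube scp's the host cache copy over.
        echo "image tarball missing; transfer it first" >&2
    else
        # Same invocation as the ssh_runner "docker load" lines above.
        sudo cat "$IMG" | docker load
    fi
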
	I1210 07:32:03.670542    2240 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1210 07:32:03.670542    2240 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
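
The kubelet unit fragment above uses the standard systemd drop-in idiom: the bare "ExecStart=" line clears any ExecStart inherited from the base unit before the minikube-specific command line is set. To inspect the merged result on the node, the usual systemd commands apply (shown here as a sketch):

    # Reload unit files after the drop-in is written, then show the
    # effective kubelet unit with all drop-ins applied.
    sudo systemctl daemon-reload
    systemctl cat kubelet
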
	I1210 07:32:03.674057    2240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:32:03.753844    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:03.753844    2240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:03.753844    2240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-648600 NodeName:custom-flannel-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:32:03.753844    2240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
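
This generated config is later copied to the node (see the scp to /var/tmp/minikube/kubeadm.yaml.new below) and handed to kubeadm. A hedged sketch of how such a file is consumed, using the standard kubeadm config-file flag; the exact invocation and extra flags minikube passes may differ by version:

    # Standard config-driven init; binary and YAML paths assume the
    # locations shown in the surrounding log.
    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml
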
	I1210 07:32:03.758233    2240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.772950    2240 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:32:03.777455    2240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
	I1210 07:32:03.796039    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:03.796814    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:32:03.796843    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:32:03.817843    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:32:03.818011    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:32:03.818298    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:32:03.818803    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:32:03.822978    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:32:03.833074    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:32:03.833638    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
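
The binary.go lines above fetch each binary from dl.k8s.io with a "checksum=file:" fragment, meaning the download is verified against the published .sha256 file rather than cached blindly. The equivalent manual steps, following the upstream install pattern with the same URLs the log shows:

    # Download kubelet and its published checksum, then verify before use.
    curl -LO "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
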
	I1210 07:32:05.838364    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:05.850364    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1210 07:32:05.870151    2240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:05.891336    2240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:32:05.915010    2240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:05.922767    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
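	The bash one-liner above rewrites /etc/hosts safely: it filters out any stale control-plane.minikube.internal entry, appends the current control-plane IP, and copies the temp file back. A minimal Go sketch of the same rewrite (simplified, not minikube's actual code; it writes in place rather than via a temp file):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale "<ip>\tcontrol-plane.minikube.internal" entry,
		// exactly what the grep -v in the logged one-liner does.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.85.2\t"+host) // IP taken from the log
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}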
	I1210 07:32:05.942185    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:06.099167    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:06.121581    2240 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600 for IP: 192.168.85.2
	I1210 07:32:06.121613    2240 certs.go:195] generating shared ca certs ...
	I1210 07:32:06.121640    2240 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.121920    2240 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:32:06.122447    2240 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:32:06.122578    2240 certs.go:257] generating profile certs ...
	I1210 07:32:06.122578    2240 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key
	I1210 07:32:06.122578    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt with IP's: []
	I1210 07:32:06.321440    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt ...
	I1210 07:32:06.321440    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt: {Name:mk30a4977cc0d8ffd50678b3c23caa1e53531dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.322223    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key ...
	I1210 07:32:06.322223    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key: {Name:mke10982a653bbe15c8edebf2f43dc216f9268be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.323200    2240 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba
	I1210 07:32:06.323200    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:32:06.341062    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba ...
	I1210 07:32:06.341062    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba: {Name:mk0e9e825524eecc7aedfd18bb3bfe0b08c0466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342014    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba ...
	I1210 07:32:06.342014    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba: {Name:mk42b80e536f4c7e07cd83fa60afbb5af1e6e8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342947    2240 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt
	I1210 07:32:06.354920    2240 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key
	I1210 07:32:06.355812    2240 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key
	I1210 07:32:06.355812    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt with IP's: []
	I1210 07:32:06.438517    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt ...
	I1210 07:32:06.438517    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt: {Name:mk49d63357d91f886b5db1adca8a8959ac8a2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.439596    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key ...
	I1210 07:32:06.439596    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key: {Name:mkd00fe816a16ba7636ee1faff5584095510b505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
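	The apiserver profile cert above is issued with a fixed set of IP SANs: the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 192.168.85.2. A self-signed Go sketch of issuing such a cert (key size and usages are assumptions; minikube actually signs with its minikubeCA key rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 2048-bit RSA is an assumption; minikube's key size may differ.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// The four IP SANs logged for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; the real cert is signed by minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}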
	I1210 07:32:06.454147    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:32:06.454968    2240 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:06.454968    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:32:06.455228    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:32:06.455417    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:32:06.455581    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:32:06.455768    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:32:06.456703    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:06.490234    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:06.516382    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:06.546895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:06.579157    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:32:06.611194    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:32:06.642582    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:06.673947    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:32:06.702762    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:32:06.734932    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:32:06.763895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:06.794884    2240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:32:06.824804    2240 ssh_runner.go:195] Run: openssl version
	I1210 07:32:06.839620    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.863187    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:32:06.881235    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.889982    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.896266    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.945361    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:06.965592    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:32:06.982615    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.000345    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:32:07.019650    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.028440    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.032681    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.080664    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.098781    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.119820    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.138968    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:07.157588    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.166110    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.169123    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.218939    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:32:07.238245    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
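	The repeated openssl-then-ln-fs sequence above exists because OpenSSL locates trusted CAs in /etc/ssl/certs by subject-hash filenames such as b5213941.0. A Go sketch of one hash-and-symlink step (shelling out to openssl, as the log itself does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// Same probe as the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}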
	I1210 07:32:07.255844    2240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:07.263714    2240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:32:07.263714    2240 kubeadm.go:401] StartCluster: {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:07.267520    2240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:32:07.300048    2240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:07.317060    2240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:32:07.333647    2240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:32:07.337744    2240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:32:07.353638    2240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:32:07.353638    2240 kubeadm.go:158] found existing configuration files:
	
	I1210 07:32:07.357869    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:32:07.371538    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:32:07.375620    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:32:07.392582    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:32:07.408459    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:32:07.412872    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:32:07.431340    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.446697    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:32:07.451332    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.472431    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:04.810034    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:04.838035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:04.888039    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.888039    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:04.892025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:04.955032    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.955032    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:04.959038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:04.995031    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.995031    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:04.999034    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:05.035036    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.035036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:05.040047    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:05.079034    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.079034    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:05.084038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:05.123032    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.123032    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:05.128035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:05.165033    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.165033    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:05.169028    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:05.205183    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.205183    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:05.205183    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:05.205183    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:05.248358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:05.248358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:05.349366    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:05.349366    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:05.349366    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:05.384377    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:05.384377    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:05.439383    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:05.439383    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
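	The pgrep/docker-ps/journalctl block above is one iteration of a polling loop: while no kube-apiserver process is found, diagnostic logs are gathered and the probe is retried a few seconds later. A hypothetical Go sketch of the loop's shape, inferred from the log cadence (command list abbreviated):

package main

import (
	"os/exec"
	"time"
)

// apiserverUp mirrors the logged probe: pgrep -xnf kube-apiserver.*minikube.*
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for !apiserverUp() {
		// While the apiserver is absent, gather the same log sources seen
		// in each iteration above (kubelet, dmesg, docker, container status).
		_ = exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Run()
		_ = exec.Command("sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400").Run()
		time.Sleep(3 * time.Second) // roughly the interval in the timestamps
	}
}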
	I1210 07:32:08.021198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:08.045549    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:08.076568    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.076568    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:08.082429    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:08.113514    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.113514    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:08.117280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:08.145243    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.145243    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:08.151846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:08.182475    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.182475    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:08.186570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:08.214500    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.214554    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:08.218698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:08.250229    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.250229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:08.254493    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:08.298394    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.298394    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:08.302457    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:08.331561    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.331561    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:08.331561    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:08.331561    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:08.368913    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:08.368913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:32:07.487983    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:32:07.492242    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:32:07.510557    2240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:32:07.626646    2240 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:32:07.630270    2240 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:32:07.725615    2240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
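	kubeadm init is launched with PATH pinned to the versioned binary directory and a long --ignore-preflight-errors list, since checks such as Swap and SystemVerification are expected to fail inside the docker driver. A Go sketch of assembling that command line (the ignored-check list is abbreviated from the logged invocation):

package main

import (
	"fmt"
	"strings"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.34.3"
	// A subset of the preflight checks ignored in the logged command.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	cmd := fmt.Sprintf(
		"sudo /bin/bash -c \"env PATH=%s:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s\"",
		binDir, strings.Join(ignored, ","))
	fmt.Println(cmd)
}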
	W1210 07:32:08.453343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:08.453378    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:08.453417    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:08.488219    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:08.488219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:08.533777    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:08.533777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.100898    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:11.123310    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:11.154369    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.154369    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:11.158211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:11.188349    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.188419    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:11.191999    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:11.218233    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.218263    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:11.222177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:11.248157    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.248157    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:11.252075    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:11.280934    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.280934    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:11.284871    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:11.316173    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.316225    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:11.320150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:11.350432    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.350494    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:11.354282    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:11.381767    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.381819    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:11.381819    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:11.381874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.447079    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:11.447079    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:11.485987    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:11.485987    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:11.568313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:11.568365    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:11.568408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:11.599474    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:11.599518    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:14.165429    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:14.189363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:14.220411    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.220478    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:14.223878    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:14.253748    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.253798    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:14.257409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:14.288235    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.288235    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:14.291689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:14.323349    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.323349    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:14.326680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:14.355227    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.355227    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:14.358704    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:14.389648    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.389648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:14.393032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:14.424212    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.424212    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:14.427425    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:14.457834    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.457834    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:14.457834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:14.457834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:14.486053    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:14.486053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:14.538138    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:14.538138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:14.601542    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:14.601542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:14.638885    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:14.638885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:14.724482    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
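The same gather cycle repeats at roughly three-second intervals (07:32:20, :23, :26, :29, ...), which is the cadence of minikube's wait-for-apiserver loop: probe, fail, collect kubelet/dmesg/describe-nodes/Docker/container-status output, retry. When the loop never converges, the kubelet journal collected on each pass is usually the first place to look for why the static pods were never started; a sketch, again with a placeholder profile name:

    # Mirror the collector's kubelet query interactively (placeholder <profile>):
    minikube ssh -p <profile> -- "sudo journalctl -u kubelet -n 100 --no-pager"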
	I1210 07:32:29.223517    2240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:32:29.224269    2240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:32:29.224467    2240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:32:29.229027    2240 out.go:252]   - Generating certificates and keys ...
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:32:29.229660    2240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:32:29.229827    2240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:32:29.230468    2240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.230658    2240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:32:29.230768    2240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:32:29.230900    2240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:32:29.231503    2240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:32:29.231582    2240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:32:29.231582    2240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:32:29.234181    2240 out.go:252]   - Booting up control plane ...
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:32:29.234702    2240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:32:29.234874    2240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002366911s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.235267696s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.434241439s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.5023353s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:32:29.236992    2240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:32:29.237590    2240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:32:29.237590    2240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:32:29.237590    2240 kubeadm.go:319] [bootstrap-token] Using token: a4ld74.20ve6i3rm5ksexxo
	I1210 07:32:29.239648    2240 out.go:252]   - Configuring RBAC rules ...
	I1210 07:32:29.239648    2240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:32:29.240674    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:32:29.240944    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:32:29.241383    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:32:29.241649    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:32:29.241668    2240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:32:29.241668    2240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:32:29.242197    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:32:29.242850    2240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:32:29.242850    2240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:32:29.243436    2240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--control-plane 
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:32:29.244018    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.244018    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
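The bootstrap token printed above (a4ld74.20ve6i3rm5ksexxo) is short-lived under kubeadm's defaults (24h TTL). If it has expired by the time another node should join, an equivalent join command can be regenerated on the control-plane node with stock kubeadm tooling:

    # Regenerate a join command with a fresh token (standard kubeadm, not minikube-specific):
    kubeadm token create --print-join-command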
	I1210 07:32:29.244018    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:29.246745    2240 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1210 07:32:29.266121    2240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 07:32:29.270492    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1210 07:32:29.280075    2240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1210 07:32:29.280075    2240 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1210 07:32:29.314572    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
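The stat failure just above is benign: the runner checks whether /var/tmp/minikube/cni.yaml already exists before copying, so "No such file or directory" on a fresh node simply means the copy proceeds. A sketch of the check-then-copy pattern, with a hypothetical SSH host standing in for the ssh_runner transport:

    # Hypothetical host "node"; file paths match the log above.
    if ! ssh node 'stat -c "%s %y" /var/tmp/minikube/cni.yaml' 2>/dev/null; then
      scp testdata/kube-flannel.yaml node:/var/tmp/minikube/cni.yaml
    fi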
	I1210 07:32:29.754597    2240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-648600 minikube.k8s.io/updated_at=2025_12_10T07_32_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=custom-flannel-648600 minikube.k8s.io/primary=true
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.770603    2240 ops.go:34] apiserver oom_adj: -16
	I1210 07:32:29.895974    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.395328    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.896828    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.396414    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.896200    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:32.396778    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
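The repeated `kubectl get sa default` runs are a poll: the controller-manager creates the "default" ServiceAccount asynchronously after the control plane comes up, and minikube retries about every 500 ms until the lookup succeeds (this run reports 4.3302518s for the wait, logged below as elevateKubeSystemPrivileges). The loop is equivalent to:

    # Poll until the default ServiceAccount exists, as the runs above do:
    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done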
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:32.894984    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.397040    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.895777    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:34.084987    2240 kubeadm.go:1114] duration metric: took 4.3302518s to wait for elevateKubeSystemPrivileges
	I1210 07:32:34.085013    2240 kubeadm.go:403] duration metric: took 26.8208803s to StartCluster
	I1210 07:32:34.085095    2240 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.085299    2240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:32:34.087295    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.088397    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:32:34.088397    2240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:32:34.088932    2240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:34.089115    2240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-648600"
	I1210 07:32:34.089272    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.089454    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:32:34.091048    2240 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:34.099313    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.100384    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.101389    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.165121    2240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-648600"
	I1210 07:32:34.165121    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.166107    2240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:32:34.174109    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.177116    2240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:34.177116    2240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:32:34.181109    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.228110    2240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.228110    2240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:32:34.231111    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.232110    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.295102    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.361698    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:32:34.577307    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.743911    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.748484    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:35.145540    2240 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
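The sed pipeline at 07:32:34.361 rewrites the CoreDNS ConfigMap in place: it inserts a hosts stanza mapping host.minikube.internal to 192.168.65.254 ahead of the forward directive and adds the log plugin before errors, which is what the "host record injected" line above confirms. The result can be inspected on the node; the expected fragment is reconstructed in the comment:

    # Inspect the rewritten Corefile; per the sed expression it now contains:
    #   hosts {
    #      192.168.65.254 host.minikube.internal
    #      fallthrough
    #   }
    sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml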
	I1210 07:32:35.149854    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:35.210514    2240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:35.684992    2240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-648600" context rescaled to 1 replicas
	I1210 07:32:35.860846    2240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1123448s)
	I1210 07:32:35.863841    2240 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 07:32:35.869842    2240 addons.go:530] duration metric: took 1.7814171s for enable addons: enabled=[default-storageclass storage-provisioner]
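Only default-storageclass and storage-provisioner default to true in the toEnable map logged at 07:32:34.088, which matches the two addons enabled here. Anything else in that map (metrics-server, dashboard, ingress, ...) stays off unless toggled per profile:

    # Enable a further addon for this profile (standard minikube CLI):
    minikube addons enable metrics-server -p custom-flannel-648600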
	W1210 07:32:37.217134    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
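The node_ready warnings that follow are the expected readiness poll for a custom-CNI start: the node reports Ready=False until the flannel manifest applied earlier programs the pod network, and minikube keeps retrying within the 15m0s budget set at 07:32:34.088. The same wait can be expressed client-side, assuming the host kubeconfig already points at this profile:

    # Equivalent client-side wait; the timeout mirrors minikube's 15m budget:
    kubectl wait --for=condition=Ready node/custom-flannel-648600 --timeout=15m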
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:32:39.747934    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:42.215582    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
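Before each gathering pass, the runner above probes for every control-plane component by listing Docker containers named `k8s_<component>` and warns when the list is empty. A rough standalone sketch of that scan, assuming only that it reduces to one `docker ps` per component as the logged commands suggest:

```go
// Sketch of the per-component container scan seen above; component names
// mirror the log, the loop itself is a hypothetical reconstruction.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}
```

Eight consecutive empty scans, repeated on every cycle, is exactly what the W-level "No container was found matching …" lines record.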
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:44.217341    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:45.217929    2240 node_ready.go:49] node "custom-flannel-648600" is "Ready"
	I1210 07:32:45.217929    2240 node_ready.go:38] duration metric: took 10.0071872s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:45.217929    2240 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:45.221913    2240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.241224    2240 api_server.go:72] duration metric: took 11.1520714s to wait for apiserver process to appear ...
	I1210 07:32:45.241248    2240 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:45.241297    2240 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58199/healthz ...
	I1210 07:32:45.255531    2240 api_server.go:279] https://127.0.0.1:58199/healthz returned 200:
	ok
	I1210 07:32:45.259632    2240 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:45.259696    2240 api_server.go:131] duration metric: took 18.4479ms to wait for apiserver health ...
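The healthz wait above succeeds the moment a GET on the forwarded apiserver port returns 200 with body `ok`. A hedged sketch of that readiness probe (the port is copied from the log; the skipped TLS verification and the poll interval are illustrative assumptions, since the real client authenticates against the cluster CA):

```go
// Sketch: poll an apiserver /healthz endpoint until it answers 200 "ok".
// InsecureSkipVerify is for illustration only; minikube's own check
// presents proper cluster credentials.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://127.0.0.1:58199/healthz" // port taken from the log above
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == 200 && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```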
	I1210 07:32:45.259716    2240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:45.268791    2240 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:45.268849    2240 system_pods.go:61] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.268849    2240 system_pods.go:61] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.268894    2240 system_pods.go:74] duration metric: took 9.14ms to wait for pod list to return data ...
	I1210 07:32:45.268935    2240 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:45.273316    2240 default_sa.go:45] found service account: "default"
	I1210 07:32:45.273353    2240 default_sa.go:55] duration metric: took 4.4181ms for default service account to be created ...
	I1210 07:32:45.273353    2240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:45.280767    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.280945    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.280945    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.281064    2240 retry.go:31] will retry after 250.377545ms: missing components: kube-dns
	I1210 07:32:45.539061    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.539616    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.539616    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.539718    2240 retry.go:31] will retry after 289.337772ms: missing components: kube-dns
	I1210 07:32:45.840329    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.840329    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.840329    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.840528    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.840528    2240 retry.go:31] will retry after 309.196772ms: missing components: kube-dns
	I1210 07:32:46.157293    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.157293    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.157293    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.157293    2240 retry.go:31] will retry after 407.04525ms: missing components: kube-dns
	I1210 07:32:46.592154    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.592265    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.592265    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.592318    2240 retry.go:31] will retry after 495.94184ms: missing components: kube-dns
	I1210 07:32:47.094557    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.094557    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.094557    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.095074    2240 retry.go:31] will retry after 778.892273ms: missing components: kube-dns
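The `retry.go:31` lines above show the wait for kube-dns: each pass lists the kube-system pods, and as long as a required component is still missing the check sleeps for a growing, slightly jittered interval (250ms, 289ms, 309ms, 407ms, 496ms, 779ms, …) and tries again. A simplified sketch of that pattern; the growth factor, jitter, and `checkPods` stand-in are assumptions, not minikube's actual retry package:

```go
// Sketch of poll-with-growing-backoff as in the retry.go lines above.
// checkPods stands in for "list kube-system pods and look for kube-dns".
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func checkPods() bool {
	// placeholder: the real check returns true once coredns is Running
	return rand.Intn(10) == 0
}

func main() {
	wait := 250 * time.Millisecond
	for i := 0; i < 20; i++ {
		if checkPods() {
			fmt.Println("all required components running")
			return
		}
		// grow the interval and add a little jitter, mirroring the
		// 250ms -> 289ms -> 309ms -> ... progression in the log
		jitter := time.Duration(rand.Int63n(int64(wait) / 4))
		fmt.Printf("will retry after %v: missing components: kube-dns\n", wait+jitter)
		time.Sleep(wait + jitter)
		wait = wait * 5 / 4
	}
	fmt.Println("gave up waiting")
}
```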
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:47.881744    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.881744    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.881744    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.882297    2240 retry.go:31] will retry after 913.098856ms: missing components: kube-dns
	I1210 07:32:48.802046    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:48.802046    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:48.802046    2240 system_pods.go:126] duration metric: took 3.5286376s to wait for k8s-apps to be running ...
	I1210 07:32:48.802046    2240 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:48.807470    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:48.825598    2240 system_svc.go:56] duration metric: took 23.5517ms WaitForService to wait for kubelet
	I1210 07:32:48.825598    2240 kubeadm.go:587] duration metric: took 14.7364354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:48.825689    2240 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:48.831503    2240 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:32:48.831503    2240 node_conditions.go:123] node cpu capacity is 16
	I1210 07:32:48.831503    2240 node_conditions.go:105] duration metric: took 5.8138ms to run NodePressure ...
	I1210 07:32:48.831503    2240 start.go:242] waiting for startup goroutines ...
	I1210 07:32:48.831503    2240 start.go:247] waiting for cluster config update ...
	I1210 07:32:48.831503    2240 start.go:256] writing updated cluster config ...
	I1210 07:32:48.837195    2240 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:48.844148    2240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:48.853005    2240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.864384    2240 pod_ready.go:94] pod "coredns-66bc5c9577-dhgpj" is "Ready"
	I1210 07:32:48.864472    2240 pod_ready.go:86] duration metric: took 11.4282ms for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.867887    2240 pod_ready.go:83] waiting for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.876367    2240 pod_ready.go:94] pod "etcd-custom-flannel-648600" is "Ready"
	I1210 07:32:48.876367    2240 pod_ready.go:86] duration metric: took 8.4794ms for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.880884    2240 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.888453    2240 pod_ready.go:94] pod "kube-apiserver-custom-flannel-648600" is "Ready"
	I1210 07:32:48.888453    2240 pod_ready.go:86] duration metric: took 7.5694ms for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.891939    2240 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.254863    2240 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-648600" is "Ready"
	I1210 07:32:49.255015    2240 pod_ready.go:86] duration metric: took 363.0699ms for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.454047    2240 pod_ready.go:83] waiting for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.854254    2240 pod_ready.go:94] pod "kube-proxy-vrrgr" is "Ready"
	I1210 07:32:49.854329    2240 pod_ready.go:86] duration metric: took 400.2758ms for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.054101    2240 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:94] pod "kube-scheduler-custom-flannel-648600" is "Ready"
	I1210 07:32:50.453713    2240 pod_ready.go:86] duration metric: took 399.6056ms for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:40] duration metric: took 1.6095401s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:50.552047    2240 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:32:50.555856    2240 out.go:179] * Done! kubectl is now configured to use "custom-flannel-648600" cluster and "default" namespace by default
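The final check compares the local kubectl against the cluster and reports the minor-version skew; kubectl is supported within one minor version of the apiserver, so a skew above 1 would warrant a warning. A small sketch of that comparison (the parsing and threshold mirror the logged message, not minikube's exact code):

```go
// Sketch: compute the minor-version skew, as in the
// "kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)" line above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return -1
	}
	m, err := strconv.Atoi(parts[1])
	if err != nil {
		return -1
	}
	return m
}

func main() {
	kubectl, cluster := "1.34.3", "1.34.3"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl is outside the supported +/-1 minor-version skew")
	}
}
```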
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
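
The cycle above is minikube's control-plane health probe: it greps for a running kube-apiserver process, then asks Docker for each expected component container by name, and every query comes back empty. A minimal shell sketch of the same probe, reusing the exact commands from the Run: lines above (run on the node, e.g. via minikube ssh; the component list is taken verbatim from the log):

#!/usr/bin/env bash
# Sketch of the per-component probe repeated in the log above.
# Assumes a shell on the minikube node (e.g. via `minikube ssh`).
sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(docker ps -a --filter=name="k8s_${c}" --format='{{.ID}}')
  [ -n "$ids" ] && echo "$c: $ids" || echo "No container was found matching \"$c\""
done
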
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
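
Every describe-nodes attempt in this stretch fails identically: kubectl on the node dials https://localhost:8443 and gets connection refused, which is consistent with the empty kube-apiserver probe above — nothing is serving the API. To reproduce the failure by hand, the same command from the log can be paired with a direct port check (a sketch, assuming curl is present on the node image; the binary and kubeconfig paths are the ones the log shows minikube provisioning):

# Re-run the exact check from the log, then probe the port directly.
sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
# "connection refused" from kubectl usually means no listener at all:
curl -ksS https://localhost:8443/healthz || echo "nothing answering on 8443"
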
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
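
The un-tabbed W lines (pid 6044) are interleaved from a concurrent StartStop test polling the no-preload-099700 cluster, whose apiserver, forwarded to host port 57440, answers with EOF. A hand-run equivalent of that readiness poll from the Windows host would be (a sketch, assuming curl is available; -k skips certificate verification against the cluster's self-signed cert):

# Same GET the concurrent test (pid 6044) keeps retrying, run from the host.
curl -k https://127.0.0.1:57440/api/v1/nodes/no-preload-099700
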
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
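
Alongside the container probe, each cycle gathers the same diagnostics: the kubelet and Docker journals, filtered dmesg, container status, and the failing describe-nodes call. Strung together they form a small support bundle; this sketch only chains the log's own commands, with illustrative output filenames:

# Collect the same diagnostics the loop gathers (run on the node).
sudo journalctl -u kubelet -n 400 > kubelet.log
sudo journalctl -u docker -u cri-docker -n 400 > docker.log
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
( sudo `which crictl || echo crictl` ps -a || sudo docker ps -a ) > containers.log
sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt 2>&1
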
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
E1210 07:36:02.238896   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
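The repeated "0 containers: []" / "No container was found" pairs above come from minikube probing each expected control-plane component by its k8s_-prefixed container name. A minimal bash sketch of that probe loop (an illustration assuming the Docker runtime and minikube's k8s_ naming convention, not minikube's actual implementation):

    # probe each expected component container; empty output means it was never created
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        echo "no container found matching \"${c}\""
      else
        echo "${c}: ${ids}"
      fi
    done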
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
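Every "describe nodes" attempt fails the same way: nothing is listening on the apiserver port, so kubectl's discovery requests to https://localhost:8443 are refused before any API object can be read. Assuming shell access to the node (for example via minikube ssh), probing the socket directly distinguishes a dead apiserver from a broken kubeconfig:

    # probe the apiserver port directly; curl exit code 7 means "connection refused",
    # matching the "dial tcp [::1]:8443" errors logged above
    curl -sk --max-time 5 https://localhost:8443/healthz; echo "curl exit=$?"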
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
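The "container status" probe a few lines up uses a fallback chain: it runs whichever crictl the PATH provides and, if that fails, falls back to the Docker CLI, so the same snapshot works on CRI and plain-Docker nodes alike. Written long-hand, the same idea reads (a rough equivalent, not minikube's exact command):

    # prefer the CRI client when present, otherwise ask Docker directly
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi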
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 
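Here process 6044 has exhausted its 6m0s wait for node "no-preload-099700" to report Ready and aborts with GUEST_START, the terminal failure for this test. Outside minikube the same wait can be expressed with kubectl (a hypothetical reproduction, assuming a reachable apiserver and a valid kubeconfig, which this run never had):

    # block until the node reports Ready, or give up after six minutes
    kubectl wait --for=condition=Ready node/no-preload-099700 --timeout=6m0s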
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
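Besides the container probes, every gathering cycle snapshots the same three host-side sources: the kubelet unit, the docker/cri-docker units, and recent kernel warnings. Bundled into a helper they are easy to rerun by hand (a sketch assuming systemd-journald on the node, mirroring the commands logged above):

    collect_node_logs() {
      sudo journalctl -u kubelet -n 400                 # kubelet unit log
      sudo journalctl -u docker -u cri-docker -n 400    # container runtime units
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # recent kernel warnings
    }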
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:15.052930    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:15.080623    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:15.117403    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.117403    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:15.120370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:15.147363    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.148371    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:15.151363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:15.180365    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.180365    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:15.183366    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:15.215366    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.215366    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:15.218364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:15.247369    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.247369    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:15.251365    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:15.283373    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.283373    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:15.286369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:15.314370    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.314370    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:15.317368    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:15.347380    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.347380    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:15.347380    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:15.347380    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:15.421369    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:15.421369    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:15.458368    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:15.458368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:15.566221    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:15.566279    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:15.566338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:15.605803    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:15.605803    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:18.163754    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:18.197669    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:18.254543    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.254543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:18.260541    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:18.293062    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.293062    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:18.296833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:18.327885    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.327968    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:18.331280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:18.368942    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.368942    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:18.372299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:18.400463    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.400463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:18.405006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:18.446334    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.446379    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:18.449958    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:18.478295    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.478381    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:18.482123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:18.510432    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.510506    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:18.510548    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:18.510548    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:18.572862    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:18.572862    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:18.614127    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:18.614127    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:18.702730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:18.702730    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:18.702730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:18.729639    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:18.729639    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.289931    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:21.315099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:21.349129    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.349129    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:21.352917    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:21.385897    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.386013    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:21.389207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:21.439847    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.439847    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:21.444868    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:21.473011    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.473011    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:21.476938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:21.503941    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.503983    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:21.507954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:21.536377    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.536377    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:21.540123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:21.571714    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.571714    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:21.575681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:21.605581    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.605581    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:21.605581    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:21.605581    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:21.633565    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:21.633565    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.687271    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:21.687271    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:21.750102    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:21.750102    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:21.792165    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:21.792165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:21.885403    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.393597    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:24.420363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:24.450891    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.450891    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:24.454037    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:24.483407    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.483407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:24.489862    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:24.517830    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.517830    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:24.521711    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:24.549403    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.549403    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:24.553551    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:24.580367    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.580367    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:24.584748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:24.612646    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.612646    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:24.616710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:24.647684    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.647753    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:24.651184    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:24.679053    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.679053    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:24.679053    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:24.679053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:24.768115    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.768115    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:24.768115    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:24.795167    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:24.795201    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:24.844459    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:24.844459    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:24.907171    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:24.907171    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.453205    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:27.478026    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:27.513249    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.513249    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:27.517125    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:27.547733    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.547733    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:27.551680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:27.577736    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.577736    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:27.581469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:27.612483    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.612483    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:27.616434    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:27.644895    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.644895    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:27.650606    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:27.678273    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.678273    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:27.681744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:27.708604    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.708604    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:27.712244    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:27.742726    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.742726    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
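All eight probes in the block above filter `docker ps -a` by container-name prefix. With the Docker runtime (via cri-dockerd), the kubelet names its containers roughly k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so an empty result for name=k8s_kube-apiserver means the apiserver container was never created, not merely stopped. One probe reproduced as a sketch:

// probe_container.go: one of the docker ps probes above, in Go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_kube-apiserver",
		"--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:282 lines
}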
	I1210 07:34:27.742726    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:27.742726    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:27.807570    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:27.807570    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.846722    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:27.846722    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:27.929641    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:27.929641    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:27.929641    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:27.956087    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:27.956087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.506646    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:30.530148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:30.563444    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.563444    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:30.567219    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:30.596843    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.596843    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:30.600803    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:30.628947    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.628947    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:30.632665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:30.663325    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.663369    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:30.667341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:30.695640    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.695640    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:30.699545    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:30.728310    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.728310    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:30.731899    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:30.758598    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.758598    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:30.763285    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:30.792051    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.792051    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:30.792051    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:30.792051    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:30.830219    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:30.830219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:30.919635    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:30.919635    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:30.919635    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:30.949360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:30.949360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.997435    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:30.997435    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
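The timestamps advance in roughly three-second steps: the same gather-and-probe cycle repeats while minikube waits for the apiserver to come up. The shape of that wait, as a sketch (waitForAPIServer is a hypothetical helper, not minikube's actual code):

// wait.go: poll a TCP address until it accepts connections or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return nil // port is open
		}
		time.Sleep(interval) // each iteration is where the log gathering above happens
	}
	return errors.New("timed out waiting for " + addr)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}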
	I1210 07:34:33.565782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:33.590543    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:33.623936    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.623936    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:33.629607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:33.664589    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.664673    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:33.668215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:33.698892    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.698892    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:33.702344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:33.733428    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.733428    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:33.737226    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:33.764873    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.764873    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:33.768422    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:33.800350    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.800350    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:33.804811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:33.836711    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.836711    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:33.840164    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:33.869248    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.869333    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:33.869333    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:33.869333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.932626    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:33.933627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:33.974227    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:33.974227    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:34.066031    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:34.066031    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:34.066031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:34.092765    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:34.092765    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:36.652871    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:36.677531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:36.712608    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.712608    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:36.718832    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:36.748298    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.748298    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:36.751762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:36.783390    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.783403    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:36.787051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:36.815730    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.815766    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:36.819100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:36.848875    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.848875    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:36.852925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:36.886657    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.886657    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:36.890808    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:36.920858    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.920858    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:36.924583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:36.955882    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.955960    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:36.956001    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:36.956001    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:37.021848    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:37.021848    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:37.060744    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:37.060744    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:37.154895    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:37.154895    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:37.154895    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:37.182385    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:37.182385    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:39.737032    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:39.762115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:39.792900    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.792900    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:39.797014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:39.825423    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.825455    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:39.829352    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:39.856679    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.856679    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:39.860615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:39.891351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.891351    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:39.895346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:39.924351    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.924351    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:39.928531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:39.956447    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.956447    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:39.961810    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:39.987792    1436 logs.go:282] 0 containers: []
	W1210 07:34:39.987792    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:39.991127    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:40.018614    1436 logs.go:282] 0 containers: []
	W1210 07:34:40.018614    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:40.018614    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:40.018614    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:40.082378    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:40.082378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:40.123506    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:40.123506    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:40.208266    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:40.199944   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201027   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.201868   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.204245   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:40.205189   16085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:40.209272    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:40.209272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:40.239017    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:40.239017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:42.793527    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:42.818084    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:42.852095    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.852095    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:42.855685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:42.883269    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.883269    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:42.887287    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:42.918719    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.918800    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:42.923828    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:42.950663    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.950663    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:42.956319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:42.985991    1436 logs.go:282] 0 containers: []
	W1210 07:34:42.985991    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:42.989729    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:43.017767    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.017824    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:43.021689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:43.048180    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.048180    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:43.052257    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:43.081092    1436 logs.go:282] 0 containers: []
	W1210 07:34:43.081160    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:43.081183    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:43.081217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:43.174944    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:43.162932   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.166268   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.169191   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.170321   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:43.171500   16243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:43.174992    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:43.174992    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:43.202288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:43.202807    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:43.249217    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:43.249217    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:43.311267    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:43.311267    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
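Each gathering round tails the kubelet and docker/cri-docker journald units (-n 400 keeps the last 400 entries) and the kernel ring buffer at warning level and above. The kubelet step, reproduced as a sketch meant to run inside the node:

// tail_kubelet.go: last 400 journal entries for the kubelet unit.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet",
		"-n", "400", "--no-pager").CombinedOutput()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	fmt.Print(string(out))
}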
	I1210 07:34:45.857003    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:45.881743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:45.911856    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.911856    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:45.915335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:45.945613    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.945613    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:45.949134    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:45.977768    1436 logs.go:282] 0 containers: []
	W1210 07:34:45.977768    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:45.982182    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:46.010859    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.010859    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:46.014603    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:46.043489    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.043531    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:46.047198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:46.080651    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.080685    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:46.084319    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:46.116705    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.116780    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:46.121508    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:46.154299    1436 logs.go:282] 0 containers: []
	W1210 07:34:46.154299    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:46.154299    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:46.154299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:46.222546    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:46.222546    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:46.262468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:46.262468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:46.349894    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:46.340418   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.341659   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.342932   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.344391   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:46.345361   16416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:46.349894    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:46.349894    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:46.376804    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:46.376804    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:48.931982    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:48.957769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:48.990182    1436 logs.go:282] 0 containers: []
	W1210 07:34:48.990182    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:48.994255    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:49.021913    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.021913    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:49.026344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:49.054704    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.054704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:49.058471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:49.089507    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.089559    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:49.093804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:49.121462    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.121462    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:49.125755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:49.156174    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.156174    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:49.160707    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:49.190933    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.190933    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:49.194771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:49.220610    1436 logs.go:282] 0 containers: []
	W1210 07:34:49.220610    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:49.220610    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:49.220610    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:49.283897    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:49.283897    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:49.324154    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:49.324154    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:49.412165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:49.404459   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.405604   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.407007   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.408149   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:49.409161   16579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:49.412165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:49.413146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:49.440045    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:49.440045    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.013495    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:52.044149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:52.080205    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.080205    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:52.084762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:52.115105    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.115105    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:52.119720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:52.149672    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.149672    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:52.153985    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:52.186711    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.186711    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:52.192181    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:52.217751    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.217751    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:52.221590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:52.250827    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.250876    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:52.254668    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:52.284643    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.284643    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:52.288811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:52.316628    1436 logs.go:282] 0 containers: []
	W1210 07:34:52.316707    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
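When every k8s_* probe comes back empty, as in all the cycles above, the kubelet is not creating the control-plane static pods at all; verifying that their manifests exist is a reasonable next step. A sketch, assuming the standard kubeadm manifest directory that minikube also uses:

// list_manifests.go: list static pod manifests on the node.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/kubernetes/manifests")
	if err != nil {
		fmt.Println("cannot read manifests dir:", err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // expect etcd.yaml, kube-apiserver.yaml, ...
	}
}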
	I1210 07:34:52.316707    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:52.316707    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:52.348325    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:52.348325    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:52.408110    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:52.408110    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:52.471268    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:52.471268    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:52.511512    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:52.511512    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:52.594976    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:52.587009   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.588398   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.589811   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.591970   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:52.593048   16766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:55.100294    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:55.126530    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:55.160945    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.160945    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:55.164755    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:55.196407    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.196407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:55.199994    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:55.229174    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.229174    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:55.232898    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:55.265856    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.265856    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:55.268892    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:55.302098    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.302121    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:55.305590    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:55.335754    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.335754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:55.339583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:55.368170    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.368251    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:55.372008    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:55.397576    1436 logs.go:282] 0 containers: []
	W1210 07:34:55.397576    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:55.397576    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:55.397576    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:55.434345    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:55.434345    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:55.528958    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:55.516781   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.517755   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.519593   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.520640   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:55.521612   16914 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:55.528958    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:55.528958    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:55.555805    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:55.555805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:55.602232    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:55.602232    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
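
Note the cadence: the gathering passes start at 07:34:55, 07:34:58, 07:35:01, and so on, i.e. one pass roughly every three seconds until the start deadline expires. A sketch of that retry shape (checkAPIServer is a hypothetical stand-in for the pgrep and docker-ps checks shown above; this is not minikube's actual code):

    // pollsketch.go: fixed-interval retry loop matching the ~3s spacing
    // of the timestamps in this log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // checkAPIServer stands in for `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // and the `docker ps -a --filter=name=k8s_kube-apiserver` check.
    func checkAPIServer() error {
        return errors.New("no kube-apiserver container found")
    }

    func main() {
        // The deadline is illustrative; the real timeout is not visible in this excerpt.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if err := checkAPIServer(); err == nil {
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for the apiserver")
    }
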
	I1210 07:34:58.169858    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:58.195497    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:58.226557    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.226588    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:58.229677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:58.260817    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.260817    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:58.265378    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:58.293848    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.293920    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:58.297406    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:58.326737    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.326737    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:58.330307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:58.357319    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.357407    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:58.360727    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:58.392361    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.392405    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:58.395697    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:58.425728    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.425807    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:58.429369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:58.457816    1436 logs.go:282] 0 containers: []
	W1210 07:34:58.457866    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:58.457866    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:58.457866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:58.495777    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:58.495777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:58.585489    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:58.573271   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.574154   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.576361   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.577165   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:58.579860   17081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:58.585489    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:58.585489    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:58.613007    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:58.613007    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:58.661382    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:58.661382    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.230900    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:01.255356    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:01.292137    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.292190    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:01.297192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:01.328372    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.328372    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:01.332239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:01.360635    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.360635    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:01.364529    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:01.391175    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.391175    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:01.394754    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:01.423093    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.423093    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:01.427022    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:01.454965    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.454965    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:01.459137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:01.487734    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.487734    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:01.492051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:01.518150    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.518150    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:01.518150    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:01.518150    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.580940    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:01.580940    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:01.620363    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:01.620363    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:01.710696    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:01.710696    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:01.710696    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:01.736867    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:01.736867    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
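
The "container status" command above encodes a runtime fallback: `which crictl || echo crictl` resolves an absolute path for sudo when crictl is installed, and the trailing `|| sudo docker ps -a` falls back to the Docker CLI when crictl is absent or fails. The same logic in Go (a sketch under those assumptions, not minikube's implementation):

    // runtimels.go: prefer crictl when present on PATH, else fall back to docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // crictl missing or failed: same as the `|| sudo docker ps -a` branch.
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(string(out))
    }
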
	I1210 07:35:04.295439    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:04.322348    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:04.356895    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.356919    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:04.361858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:04.396943    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.397019    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:04.401065    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:04.431929    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.431929    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:04.436798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:04.468073    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.468073    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:04.472528    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:04.503230    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.503230    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:04.506632    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:04.540016    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.540016    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:04.543627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:04.576446    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.576446    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:04.583292    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:04.611475    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.611542    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:04.611542    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:04.611542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:04.640376    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:04.640433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.695309    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:04.695309    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:04.756418    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:04.756418    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:04.795089    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:04.795089    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:04.891481    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:04.878108   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.880090   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.883096   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.885167   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.886541   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.396688    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:07.422837    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:07.454807    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.454807    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:07.459071    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:07.489720    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.489720    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:07.493466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:07.519982    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.519982    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:07.523858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:07.552985    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.552985    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:07.556972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:07.589709    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.589709    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:07.593709    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:07.621519    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.621519    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:07.625151    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:07.654324    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.654404    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:07.657279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:07.690913    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.690966    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:07.690988    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:07.690988    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:07.757157    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:07.757157    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:07.796333    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:07.796333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:07.893954    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:07.881331   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.882766   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.885657   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887077   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887623   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.893954    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:07.893954    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:07.943452    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:07.943452    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.496562    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:10.522517    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:10.555517    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.555517    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:10.560160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:10.591257    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.591306    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:10.594925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:10.623075    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.623075    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:10.626725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:10.654115    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.654115    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:10.658014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:10.689683    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.689683    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:10.693386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:10.721754    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.721754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:10.725087    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:10.753052    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.753052    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:10.756926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:10.787466    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.787466    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:10.787466    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:10.787466    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:10.882563    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:10.873740   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.874902   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.876114   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.877091   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.878349   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:10.882563    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:10.882563    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:10.944299    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:10.944299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.993835    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:10.993835    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:11.053114    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:11.053114    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
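
Each pass collects the same five sources, only the order varies between passes: the kubelet and Docker/cri-docker journals, dmesg, container status, and a kubectl describe nodes run against the pinned binary under /var/lib/minikube/binaries with an explicit --kubeconfig, so it cannot be rescued by the host's kubectl context. A table-driven sketch of that collection step (the commands are copied from the log; running them locally instead of over ssh is the simplification here):

    // logsources.go: the five per-pass log sources from this run.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "Docker":           "journalctl -u docker -u cri-docker -n 400",
            "container status": "`which crictl || echo crictl` ps -a || docker ps -a",
        }
        for name, cmd := range sources {
            fmt.Println("gathering:", name)
            // The real flow runs these inside the node via ssh_runner.
            if out, err := exec.Command("/bin/bash", "-c", "sudo "+cmd).CombinedOutput(); err != nil {
                fmt.Printf("  failed: %v\n%s", err, out)
            }
        }
    }
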
	I1210 07:35:13.597304    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:13.621417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:13.653723    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.653842    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:13.657020    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:13.690175    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.690175    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:13.693954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:13.723350    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.723350    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:13.728514    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:13.757179    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.757179    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:13.765645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:13.794387    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.794473    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:13.798130    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:13.826937    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.826937    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:13.830895    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:13.865171    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.865171    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:13.869540    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:13.899920    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.899920    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:13.899920    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:13.899920    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:13.964338    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:13.964338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:14.028584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:14.028584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:14.067840    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:14.067840    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:14.154123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:14.144490   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.145615   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.146725   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.148037   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.149069   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:14.154123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:14.154123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:16.685726    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:16.716822    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:16.753764    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.753827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:16.757211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:16.789634    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.789634    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:16.793640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:16.822677    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.822728    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:16.826522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:16.853660    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.853660    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:16.858461    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:16.887452    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.887504    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:16.893014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:16.939344    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.939344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:16.943118    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:16.971703    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.971781    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:16.974884    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:17.003517    1436 logs.go:282] 0 containers: []
	W1210 07:35:17.003595    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:17.003595    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:17.003595    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:17.088355    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:17.079526   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.080729   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.081812   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.083165   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.084419   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:17.088355    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:17.088355    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:17.117181    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:17.117241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:17.168070    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:17.168155    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:17.231584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:17.231584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:19.776112    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:19.801640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:19.835886    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.835886    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:19.839626    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:19.872127    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.872127    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:19.876526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:19.929339    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.929339    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:19.933522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:19.962400    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.962400    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:19.966133    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:19.994468    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.994544    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:19.998645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:20.027252    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.027252    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:20.032575    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:20.060153    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.060153    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:20.065171    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:20.091891    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.091891    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:20.091891    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:20.091891    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:20.131103    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:20.131103    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:20.218614    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:20.208033   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.209212   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.210215   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214139   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214965   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:20.218614    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:20.219146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:20.245788    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:20.245788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:20.298111    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:20.298207    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:22.861878    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:22.887649    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:22.922573    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.922573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:22.926179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:22.959170    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.959197    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:22.963338    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:22.994510    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.994566    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:22.997861    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:23.029960    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.030036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:23.033513    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:23.064625    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.064625    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:23.069769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:23.101906    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.101943    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:23.105651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:23.136615    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.136615    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:23.140616    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:23.170857    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.170942    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:23.170942    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:23.170942    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:23.233098    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:23.233098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:23.273238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:23.273238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:23.361638    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:23.352696   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.354050   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.356707   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.357782   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.358807   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:23.361638    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:23.361638    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:23.390711    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:23.391230    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:25.949809    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:25.975470    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:26.007496    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.007496    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:26.011469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:26.044617    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.044617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:26.048311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:26.078756    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.078783    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:26.082359    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:26.112113    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.112183    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:26.115713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:26.148097    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.148097    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:26.151926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:26.182729    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.182753    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:26.186743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:26.217219    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.217219    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:26.223773    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:26.251643    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.251713    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:26.251713    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:26.251713    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:26.278698    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:26.278698    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:26.332014    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:26.332014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:26.394304    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:26.394304    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:26.433073    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:26.433073    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:26.519395    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:26.506069   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.507354   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.509591   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.512516   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.514125   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:26.506069   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.507354   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.509591   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.512516   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.514125   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:29.024398    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:29.049372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:29.084989    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.085019    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:29.089078    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:29.116420    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.116420    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:29.120531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:29.149880    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.149880    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:29.153505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:29.181726    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.181790    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:29.185295    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:29.216713    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.216713    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:29.222568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:29.249487    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.249487    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:29.253512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:29.283473    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.283497    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:29.287061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:29.313225    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.313225    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:29.313225    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:29.313225    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:29.399665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:29.399665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:29.399665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:29.428593    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:29.428593    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:29.477815    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:29.477877    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:29.541874    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:29.541874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:32.087876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:32.113456    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:32.145773    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.145805    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:32.149787    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:32.178912    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.178987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:32.182700    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:32.213301    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.213301    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:32.217129    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:32.246756    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.246824    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:32.250299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:32.278791    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.278835    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:32.282397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:32.316208    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.316278    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:32.320233    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:32.349155    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.349155    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:32.352807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:32.386875    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.386875    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:32.386944    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:32.386944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:32.479781    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:32.479781    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:32.479781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:32.506994    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:32.506994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:32.561757    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:32.561757    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:32.624545    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:32.624545    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.176040    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:35.201056    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:35.235735    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.235735    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:35.239655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:35.267349    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.267416    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:35.270515    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:35.303264    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.303264    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:35.306371    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:35.339037    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.339263    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:35.343297    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:35.375639    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.375639    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:35.379647    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:35.407670    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.407670    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:35.411506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:35.446240    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.446240    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:35.450265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:35.477814    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.477814    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:35.477814    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:35.477814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:35.541174    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:35.541174    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.581633    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:35.581633    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:35.673254    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:35.673254    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:35.673254    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:35.701200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:35.701200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:38.255869    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:38.281759    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:38.316123    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.316123    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:38.319358    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:38.348903    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.348943    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:38.352900    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:38.381759    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.381795    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:38.385361    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:38.414524    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.414586    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:38.417710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:38.447131    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.447205    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:38.451100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:38.479508    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.479543    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:38.483003    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:38.512848    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.512848    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:38.516967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:38.547680    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.547680    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:38.547680    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:38.547680    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:38.614038    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:38.614038    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:38.658448    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:38.658448    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:38.743054    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:38.743054    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:38.743054    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:38.775152    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:38.775214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:41.333835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:41.358081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:41.393471    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.393471    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:41.396774    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:41.425173    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.425224    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:41.428523    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:41.456663    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.456663    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:41.459654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:41.490212    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.490212    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:41.493250    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:41.523505    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.523505    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:41.527006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:41.555529    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.555529    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:41.559605    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:41.590913    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.591011    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:41.596392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:41.627361    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.627421    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:41.627441    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:41.627538    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:41.692948    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:41.692948    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:41.731909    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:41.731909    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:41.816121    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:41.816121    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:41.816121    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:41.844622    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:41.844622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:44.401865    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:44.426294    1436 out.go:203] 
	W1210 07:35:44.428631    1436 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:35:44.428631    1436 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:35:44.428631    1436 out.go:285] * Related issues:
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:35:44.430629    1436 out.go:203] 
	
	
	==> Docker <==
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216617054Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216699662Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216710563Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216717064Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216722865Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216746967Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.216779770Z" level=info msg="Initializing buildkit"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.379150718Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395276092Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395426306Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395462310Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:29:38 newest-cni-525200 dockerd[928]: time="2025-12-10T07:29:38.395512215Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:29:38 newest-cni-525200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:29:39 newest-cni-525200 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:29:39 newest-cni-525200 cri-dockerd[1221]: time="2025-12-10T07:29:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:29:39 newest-cni-525200 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:36:00.962287   20281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:36:00.963208   20281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:36:00.965674   20281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:36:00.967245   20281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:36:00.969625   20281 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347496] CPU: 6 PID: 490841 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe73ddc4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe73ddc4af6.
	[  +0.000000] RSP: 002b:00007ffc57a05a90 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.867258] CPU: 5 PID: 491006 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1a7acb4b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1a7acb4af6.
	[  +0.000001] RSP: 002b:00007ffe19029200 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:32] tmpfs: Unknown parameter 'noswap'
	[ +15.541609] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:36:01 up  3:04,  0 user,  load average: 1.95, 3.48, 4.29
	Linux newest-cni-525200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:58 newest-cni-525200 kubelet[20098]: E1210 07:35:58.171847   20098 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:58 newest-cni-525200 kubelet[20122]: E1210 07:35:58.903419   20122 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:58 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:35:59 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 10 07:35:59 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:59 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:35:59 newest-cni-525200 kubelet[20152]: E1210 07:35:59.621084   20152 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:35:59 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:35:59 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:36:00 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 10 07:36:00 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:36:00 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:36:00 newest-cni-525200 kubelet[20166]: E1210 07:36:00.403702   20166 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:36:00 newest-cni-525200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:36:00 newest-cni-525200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:36:01 newest-cni-525200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
	Dec 10 07:36:01 newest-cni-525200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:36:01 newest-cni-525200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-525200 -n newest-cni-525200: exit status 2 (592.8927ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-525200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (12.33s)
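
Note on the failure above: the kubelet journal in the captured output shows the likely root cause of K8S_APISERVER_MISSING. kubelet v1.35.0-rc.1 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported") and crash-loops under systemd, so no control-plane containers are ever created and the apiserver process never appears. The kernel line (5.15.153.1-microsoft-standard-WSL2) and the Docker daemon's own cgroup v1 deprecation warning both point at a WSL2 host still booted in cgroup v1 mode. A minimal check, assuming the minikube node container is named after the profile (as it is with the docker driver):

	# Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1.
	docker exec newest-cni-525200 stat -fc %T /sys/fs/cgroup
	# Same kubelet journal query the log gatherer above runs over SSH:
	docker exec newest-cni-525200 journalctl -u kubelet -n 400

On WSL2, cgroup v2 can usually be enabled by adding "kernelCommandLine = cgroup_no_v1=all" under the [wsl2] section of %UserProfile%\.wslconfig and running "wsl --shutdown"; if that holds, this is a host-configuration issue rather than a regression in the test itself.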

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (232s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1210 07:43:29.535046   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:43:51.744708   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:44:18.974415   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:44:33.287098   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:44:45.988808   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:44:50.403054   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:44:58.170713   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:45:30.298725   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:45:57.091452   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:45:59.103039   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:46:11.467323   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1210 07:46:13.475746   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57440/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
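Note: the EOF responses above indicate the TLS connection to the apiserver endpoint at 127.0.0.1:57440 is being dropped mid-request rather than refused, while the cert_rotation errors refer to client certificates under profile directories that no longer exist on disk. As an illustrative way to issue the same pod query by hand (assuming the no-preload-099700 kubeconfig context is still present):

kubectl --context no-preload-099700 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard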
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 2 (720.8519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
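As a hypothetical spot-check of the forwarded apiserver port (assuming curl is available on the host), the endpoint the test was polling can be probed directly:

curl -k https://127.0.0.1:57440/version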
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-099700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-099700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (65.7µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-099700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
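The describe call above returned in 65.7µs because the test's overall context deadline had already expired before the request could be issued. A standalone rerun with its own client-side timeout would look like this (illustrative only):

kubectl --context no-preload-099700 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper --request-timeout=30s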
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-099700
helpers_test.go:244: (dbg) docker inspect no-preload-099700:

-- stdout --
	[
	    {
	        "Id": "a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11",
	        "Created": "2025-12-10T07:17:13.908925425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:27:59.880122532Z",
	            "FinishedAt": "2025-12-10T07:27:56.24098096Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/hosts",
	        "LogPath": "/var/lib/docker/containers/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11/a93123bad5896cf83b951da83d594318cf6ba9c38e1652dff7f1b9ceeb706b11-json.log",
	        "Name": "/no-preload-099700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-099700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-099700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a-init/diff:/var/lib/docker/overlay2/66b84942d615224a755084aec288bcbcbc35c83cb84690edf875a5c72dd9709b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e0f665ffb9a370dbfc1fbedf7e6587e9044599e512720d138cf9e069c8a7d6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-099700",
	                "Source": "/var/lib/docker/volumes/no-preload-099700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-099700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-099700",
	                "name.minikube.sigs.k8s.io": "no-preload-099700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36b12f7c82c546811ea16d124f8782cdd27350c19ac1d3ab3f547c6a6d9a2eab",
	            "SandboxKey": "/var/run/docker/netns/36b12f7c82c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57440"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-099700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "19fb5b7ebc44993ca33ebb33ab9b189e482cb385e465c509a613326e2c10eb7e",
	                    "EndpointID": "5663a1495caac3a8be49ce34bbbb4f5a9e88b108cb75e92d2208550cc897ee2e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-099700",
	                        "a93123bad589"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
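The inspect output confirms that container port 8443/tcp (the apiserver) is published on 127.0.0.1:57440, the same endpoint returning EOF above. A sketch of extracting just that mapping with a Go template (quoting shown for a POSIX shell):

docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-099700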
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 2 (600.7149ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
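Note the split state: the container host reports Running while the apiserver check above reported Stopped. One way to see all status fields in a single call (illustrative):

out/minikube-windows-amd64.exe status -p no-preload-099700 --output=json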
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-099700 logs -n 25: (2.2249365s)
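The post-mortem gathers only the last 25 log lines. For a fuller capture, minikube logs can write everything to a file (illustrative invocation; the output filename is hypothetical):

out/minikube-windows-amd64.exe -p no-preload-099700 logs --file=no-preload-099700.log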
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                          ARGS                                          │        PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-648600 sudo cat /var/lib/kubelet/config.yaml                         │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status docker --all --full --no-pager          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat docker --no-pager                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/docker/daemon.json                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo docker system info                                       │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status cri-docker --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat cri-docker --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /usr/lib/systemd/system/cri-docker.service           │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cri-dockerd --version                                    │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status containerd --all --full --no-pager      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat containerd --no-pager                      │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /lib/systemd/system/containerd.service               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo cat /etc/containerd/config.toml                          │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo containerd config dump                                   │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl status crio --all --full --no-pager            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │                     │
	│ ssh     │ -p custom-flannel-648600 sudo systemctl cat crio --no-pager                            │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ ssh     │ -p custom-flannel-648600 sudo crio config                                              │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ delete  │ -p custom-flannel-648600                                                               │ custom-flannel-648600 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:33 UTC │ 10 Dec 25 07:33 UTC │
	│ image   │ newest-cni-525200 image list --format=json                                             │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ pause   │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ unpause │ -p newest-cni-525200 --alsologtostderr -v=1                                            │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete  │ -p newest-cni-525200                                                                   │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:36 UTC │ 10 Dec 25 07:36 UTC │
	│ delete  │ -p newest-cni-525200                                                                   │ newest-cni-525200     │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 07:36 UTC │ 10 Dec 25 07:36 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
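The Audit table records recent minikube invocations; the `sudo systemctl status crio` entry has no END TIME, most likely because the command exits non-zero on a Docker-runtime node where crio is not active. An equivalent manual check against the still-running profile, mirroring the audit entries above (illustrative):

out/minikube-windows-amd64.exe ssh -p no-preload-099700 "sudo systemctl status crio --no-pager"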
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:31:27
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:31:27.429465    2240 out.go:360] Setting OutFile to fd 1904 ...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.483636    2240 out.go:374] Setting ErrFile to fd 1148...
	I1210 07:31:27.483636    2240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:31:27.498633    2240 out.go:368] Setting JSON to false
	I1210 07:31:27.500624    2240 start.go:133] hostinfo: {"hostname":"minikube4","uptime":10819,"bootTime":1765341068,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 07:31:27.500624    2240 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 07:31:27.505874    2240 out.go:179] * [custom-flannel-648600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 07:31:27.510785    2240 notify.go:221] Checking for updates...
	I1210 07:31:27.513604    2240 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:31:27.516776    2240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:31:27.521423    2240 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 07:31:27.524646    2240 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 07:31:27.526628    2240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 07:31:23.340249    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:27.530138    2240 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:27.530637    2240 config.go:182] Loaded profile config "newest-cni-525200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.530927    2240 config.go:182] Loaded profile config "no-preload-099700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 07:31:27.531072    2240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:31:27.674116    2240 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 07:31:27.679999    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:27.935225    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:27.906881904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:27.940210    2240 out.go:179] * Using the docker driver based on user configuration
	I1210 07:31:27.947210    2240 start.go:309] selected driver: docker
	I1210 07:31:27.947210    2240 start.go:927] validating driver "docker" against <nil>
	I1210 07:31:27.947210    2240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:31:28.038927    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:28.306393    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:28.276193336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:28.307456    2240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:31:28.308474    2240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:31:28.311999    2240 out.go:179] * Using Docker Desktop driver with root privileges
	I1210 07:31:28.314563    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:31:28.314921    2240 start_flags.go:336] Found "testdata\\kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1210 07:31:28.314921    2240 start.go:353] cluster config:
	{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:31:28.317704    2240 out.go:179] * Starting "custom-flannel-648600" primary control-plane node in "custom-flannel-648600" cluster
	I1210 07:31:28.318967    2240 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 07:31:28.320981    2240 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1210 07:31:23.421229    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:23.421229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:23.460218    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:23.460218    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:23.544413    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:23.535582    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.536730    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.537358    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.539749    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:23.540912    5602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
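The connection-refused errors here come from inside the node itself: nothing is listening on localhost:8443, which matches the container scans below finding no kube-apiserver container. A hypothetical in-node probe (assuming curl exists in the node image, and using no-preload-099700 as the assumed profile):

out/minikube-windows-amd64.exe ssh -p no-preload-099700 "curl -sk https://localhost:8443/healthz"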
	I1210 07:31:26.050161    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:26.077105    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:26.111827    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.111827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:26.116713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:26.160114    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.160114    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:26.163744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:26.201139    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.201139    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:26.204831    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:26.240411    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.240462    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:26.244533    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:26.280463    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.280463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:26.285443    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:26.317450    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.317450    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:26.320454    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:26.356058    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.356058    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:26.360642    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:26.406955    1436 logs.go:282] 0 containers: []
	W1210 07:31:26.406994    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:26.407032    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:26.407032    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:26.486801    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:26.486845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:26.525844    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:26.525844    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:26.629730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:26.619679    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.620896    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.621633    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623054    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:26.623677    5754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:26.630733    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:26.630733    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:26.786973    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:26.786973    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:28.323967    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:28.323967    2240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 07:31:28.370604    2240 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 07:31:28.410253    2240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1210 07:31:28.410253    2240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	W1210 07:31:28.586590    2240 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
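	Both preload locations (GCS and GitHub) returned 404 for the v1.34.3 docker/overlay2 tarball, so the images are cached individually below. The availability check can be reproduced with a plain HEAD request (illustrative; URL taken verbatim from the log line above):

curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4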
	I1210 07:31:28.586590    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:28.586590    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json: {Name:mk37135597d0b3e0094e1cb1b5ff50d942db06b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:31:28.586590    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:28.587928    2240 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:31:28.587928    2240 start.go:360] acquireMachinesLock for custom-flannel-648600: {Name:mk4a3a34c58cff29c46217d57a91ed79fc9f522b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:28.588459    2240 start.go:364] duration metric: took 531.3µs to acquireMachinesLock for "custom-flannel-648600"
	I1210 07:31:28.588615    2240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:31:28.588742    2240 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:31:28.592548    2240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 07:31:28.593172    2240 start.go:159] libmachine.API.Create for "custom-flannel-648600" (driver="docker")
	I1210 07:31:28.593172    2240 client.go:173] LocalClient.Create starting
	I1210 07:31:28.593172    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.594367    2240 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Decoding PEM data...
	I1210 07:31:28.595268    2240 main.go:143] libmachine: Parsing certificate...
	I1210 07:31:28.601656    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:31:28.702719    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:31:28.710721    2240 network_create.go:284] running [docker network inspect custom-flannel-648600] to gather additional debugging logs...
	I1210 07:31:28.710721    2240 cli_runner.go:164] Run: docker network inspect custom-flannel-648600
	W1210 07:31:28.938963    2240 cli_runner.go:211] docker network inspect custom-flannel-648600 returned with exit code 1
	I1210 07:31:28.938963    2240 network_create.go:287] error running [docker network inspect custom-flannel-648600]: docker network inspect custom-flannel-648600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-648600 not found
	I1210 07:31:28.938963    2240 network_create.go:289] output of [docker network inspect custom-flannel-648600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-648600 not found
	
	** /stderr **
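
The exit code 1 above is expected at this stage: the --format template only shapes the output, and the inspect fails simply because the network has not been created yet. A minimal check of the same condition by hand:

    # prints the name if the network exists, otherwise reports it missing
    docker network inspect custom-flannel-648600 --format '{{.Name}}' 2>/dev/null \
      || echo "network not found (normal before first start)"
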
	I1210 07:31:28.945949    2240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:31:29.091971    2240 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.381586    2240 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:29.465291    2240 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a8ae0}
	I1210 07:31:29.465291    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1210 07:31:29.470056    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.046347    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.046347    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.046347    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.67.0/24, will retry: subnet is taken
	I1210 07:31:30.140283    2240 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.262644    2240 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017e1d40}
	I1210 07:31:30.262866    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 07:31:30.267646    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	W1210 07:31:30.581811    2240 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600 returned with exit code 1
	W1210 07:31:30.581811    2240 network_create.go:149] failed to create docker network custom-flannel-648600 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1210 07:31:30.581811    2240 network_create.go:116] failed to create docker network custom-flannel-648600 192.168.76.0/24, will retry: subnet is taken
	I1210 07:31:30.621040    2240 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1210 07:31:30.648052    2240 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cde450}
	I1210 07:31:30.648052    2240 network_create.go:124] attempt to create docker network custom-flannel-648600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1210 07:31:30.656045    2240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-648600 custom-flannel-648600
	I1210 07:31:30.870907    2240 network_create.go:108] docker network custom-flannel-648600 192.168.85.0/24 created
	I1210 07:31:30.870907    2240 kic.go:121] calculated static IP "192.168.85.2" for the "custom-flannel-648600" container
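
The two "Pool overlaps" failures show the subnet picker walking its private /24 candidates: 192.168.49.0 and 192.168.58.0 are skipped as reserved, 192.168.67.0 and 192.168.76.0 collide with existing pools, and 192.168.85.0 succeeds; the gateway takes .1 and the node gets .2. On a host with standard shell tooling, something like this shows which subnets are already claimed:

    # list every docker network together with its IPAM subnet(s)
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
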
	I1210 07:31:30.881906    2240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:31:31.006456    2240 cli_runner.go:164] Run: docker volume create custom-flannel-648600 --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:31:31.098467    2240 oci.go:103] Successfully created a docker volume custom-flannel-648600
	I1210 07:31:31.104469    2240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
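
The short-lived "-preload-sidecar" container is a volume-warming trick: running the image with the named volume mounted at /var and an entrypoint of /usr/bin/test forces Docker to create the volume and copy the kicbase image's /var content into it before the real node container mounts it. Generically, with illustrative names:

    # mount the named volume, assert a path inside it exists, exit immediately
    docker run --rm --entrypoint /usr/bin/test -v myvolume:/var myimage -d /var/lib
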
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1210 07:31:31.792496    2240 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.792496    2240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2058554s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1210 07:31:31.792496    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 exists
	I1210 07:31:31.792496    2240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.34.3" took 3.2053301s
	I1210 07:31:31.792496    2240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 succeeded
	I1210 07:31:31.794500    2240 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.794500    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1210 07:31:31.794500    2240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2078599s
	I1210 07:31:31.795487    2240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1210 07:31:31.796493    2240 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.796493    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 exists
	I1210 07:31:31.796493    2240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.34.3" took 3.2098526s
	I1210 07:31:31.796493    2240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 succeeded
	I1210 07:31:31.809204    2240 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.809204    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1210 07:31:31.809204    2240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2225634s
	I1210 07:31:31.809728    2240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1210 07:31:31.821783    2240 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.822582    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 exists
	I1210 07:31:31.822582    2240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.34.3" took 3.2354164s
	I1210 07:31:31.822582    2240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 succeeded
	I1210 07:31:31.828690    2240 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.828690    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 exists
	I1210 07:31:31.828690    2240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.12.1" took 3.2420491s
	I1210 07:31:31.828690    2240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 succeeded
	I1210 07:31:31.868175    2240 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:31:31.869189    2240 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 exists
	I1210 07:31:31.869189    2240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.3" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.34.3" took 3.2820228s
	I1210 07:31:31.869189    2240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 succeeded
	I1210 07:31:31.869189    2240 cache.go:87] Successfully saved all images to host disk.
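
Each cached image is guarded by its own file lock and stored as a tarball under .minikube\cache\images; the \\?\Volume{...} prefix is just the Windows volume-GUID spelling of the same directory. These files appear to be ordinary image tarballs, so (an aside, assuming a matching architecture and tarball format) one could in principle be imported by hand:

    # hedged: load a cached image tarball straight into a local Docker daemon
    docker load -i pause_3.10.1
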
	I1210 07:31:29.397246    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:29.477876    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:29.605797    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.605797    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:29.612110    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:29.728807    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.728807    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:29.734404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:29.836328    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.836328    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:29.841346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:29.932721    1436 logs.go:282] 0 containers: []
	W1210 07:31:29.933712    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:29.938725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:30.029301    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.029301    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:30.034503    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:30.132157    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.132157    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:30.137284    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:30.276443    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.276443    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:30.284280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:30.440215    1436 logs.go:282] 0 containers: []
	W1210 07:31:30.440215    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:30.440215    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:30.440215    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:30.586863    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:30.586863    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:30.654056    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:30.654056    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:30.825025    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:30.810195    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.812029    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.813542    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.815272    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:30.818214    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:30.825083    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:30.825083    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:30.883913    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:30.883913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
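
The container-status command is a small fallback idiom: the backticks expand to crictl when it is on PATH (otherwise to the bare word crictl, which fails), and the || then drops back to plain docker ps. The same thing in modern substitution syntax:

    # prefer crictl if installed, otherwise fall back to docker
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
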
	I1210 07:31:32.772569    2240 cli_runner.go:217] Completed: docker run --rm --name custom-flannel-648600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --entrypoint /usr/bin/test -v custom-flannel-648600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.6680738s)
	I1210 07:31:32.772569    2240 oci.go:107] Successfully prepared a docker volume custom-flannel-648600
	I1210 07:31:32.772569    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:32.777565    2240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:31:33.023291    2240 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-10 07:31:33.001747684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 07:31:33.027286    2240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:31:33.264619    2240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-648600 --name custom-flannel-648600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-648600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-648600 --network custom-flannel-648600 --ip 192.168.85.2 --volume custom-flannel-648600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1210 07:31:34.003194    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Running}}
	I1210 07:31:34.069196    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.137196    2240 cli_runner.go:164] Run: docker exec custom-flannel-648600 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:31:34.255530    2240 oci.go:144] the created container "custom-flannel-648600" has a running status.
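
Each --publish=127.0.0.1::<port> in the docker run above asks Docker to bind a random loopback host port to the container port (8443 API server, 22 SSH, 2376 dockerd TLS, 5000 registry, 32443 ingress). The mappings it chose can be read back with:

    docker port custom-flannel-648600
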
	I1210 07:31:34.255530    2240 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
	I1210 07:31:34.371827    2240 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:31:34.454671    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:34.514682    2240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:31:34.514682    2240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-648600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:31:34.665673    2240 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa...
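
With the key in place the node is reachable over plain SSH; a manual session equivalent to what the provisioner does next would look roughly like this (host port 58200 taken from the log lines below; minikube itself wraps this as "minikube ssh"):

    ssh -p 58200 -i C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa docker@127.0.0.1
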
	I1210 07:31:37.044619    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:31:37.095607    2240 machine.go:94] provisionDockerMachine start ...
	I1210 07:31:37.098607    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.155601    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.171620    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.171620    2240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:31:37.347331    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.347331    2240 ubuntu.go:182] provisioning hostname "custom-flannel-648600"
	I1210 07:31:37.350327    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.408671    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.409222    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.409222    2240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-648600 && echo "custom-flannel-648600" | sudo tee /etc/hostname
	W1210 07:31:33.500806    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:33.522798    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:33.542801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:33.574796    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.574796    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:33.577799    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:33.609805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.609805    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:33.613806    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:33.647528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.647528    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:33.650525    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:33.682527    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.683531    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:33.686536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:33.715528    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.715528    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:33.718520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:33.752522    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.752522    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:33.755526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:33.789961    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.789961    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:33.794804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:33.824805    1436 logs.go:282] 0 containers: []
	W1210 07:31:33.824805    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:33.824805    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:33.824805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:33.908771    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:33.908771    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:33.958763    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:33.958763    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:34.080194    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:34.067865    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.069363    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.070689    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.072220    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:34.073030    6096 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:34.080194    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:34.080194    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:34.114208    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:34.114208    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:36.683658    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:36.704830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:36.739690    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.739690    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:36.742694    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:36.772249    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.772249    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:36.776265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:36.812803    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.812803    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:36.816811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:36.849259    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.849259    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:36.852518    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:36.890605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.890605    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:36.895610    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:36.937605    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.937605    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:36.942601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:36.979599    1436 logs.go:282] 0 containers: []
	W1210 07:31:36.979599    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:36.984601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:37.022606    1436 logs.go:282] 0 containers: []
	W1210 07:31:37.022606    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:37.022606    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:37.022606    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:37.086612    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:37.086612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:37.128602    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:37.128602    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:37.225605    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:37.215773    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.216591    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.218566    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.219365    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:37.221626    6265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:37.225605    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:37.225605    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:37.254615    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:37.254615    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:37.617301    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-648600
	
	I1210 07:31:37.621329    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:37.680493    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:37.681514    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:37.681514    2240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-648600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-648600/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-648600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:31:37.850452    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
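
The /etc/hosts script follows the Debian convention of mapping the machine's hostname to 127.0.1.1 so that sudo and other tools can resolve the hostname without DNS. A quick confirmation from the host (sketch):

    minikube -p custom-flannel-648600 ssh -- grep custom-flannel-648600 /etc/hosts
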
	I1210 07:31:37.850452    2240 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1210 07:31:37.850452    2240 ubuntu.go:190] setting up certificates
	I1210 07:31:37.850452    2240 provision.go:84] configureAuth start
	I1210 07:31:37.855263    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:37.926854    2240 provision.go:143] copyHostCerts
	I1210 07:31:37.927569    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1210 07:31:37.927608    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1210 07:31:37.928059    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1210 07:31:37.928961    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1210 07:31:37.928961    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1210 07:31:37.928961    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1210 07:31:37.930358    2240 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1210 07:31:37.930390    2240 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1210 07:31:37.930744    2240 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I1210 07:31:37.931754    2240 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-flannel-648600 san=[127.0.0.1 192.168.85.2 custom-flannel-648600 localhost minikube]
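
The server certificate is minted from the local CA with the SAN list shown (node IP 192.168.85.2, 127.0.0.1, the cluster name, localhost, minikube), which is what lets clients verify the endpoint under any of those names. The SANs on the generated server.pem can be inspected with:

    openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
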
	I1210 07:31:38.038131    2240 provision.go:177] copyRemoteCerts
	I1210 07:31:38.042277    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:31:38.045314    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.098793    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:38.243502    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 07:31:38.284050    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1210 07:31:38.320436    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:31:38.351829    2240 provision.go:87] duration metric: took 501.3694ms to configureAuth
	I1210 07:31:38.351829    2240 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:31:38.352840    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:31:38.355824    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.405824    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.405824    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.405824    2240 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1210 07:31:38.582107    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1210 07:31:38.582107    2240 ubuntu.go:71] root file system type: overlay
	I1210 07:31:38.582107    2240 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1210 07:31:38.585874    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.646407    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.646407    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.646407    2240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1210 07:31:38.847766    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1210 07:31:38.852241    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:38.938899    2240 main.go:143] libmachine: Using SSH client type: native
	I1210 07:31:38.938899    2240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff61df9fd00] 0x7ff61dfa2860 <nil>  [] 0s} 127.0.0.1 58200 <nil> <nil>}
	I1210 07:31:38.938899    2240 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1210 07:31:40.711527    2240 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-10 07:31:38.832035101 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
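The SSH command issued above installs the rendered unit atomically: "diff -u" exits zero when the candidate file matches the live one, so the move, daemon-reload, enable, and restart only run when the unit actually changed. A minimal standalone sketch of the same pattern (paths, unit name, and systemctl flags taken from this log; the heredoc content is elided):

    # stage the candidate unit next to the live one
    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    ...rendered unit content, elided...
    EOF
    # swap and restart only when the content differs (diff exits non-zero)
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }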
	I1210 07:31:40.711665    2240 machine.go:97] duration metric: took 3.616002s to provisionDockerMachine
	I1210 07:31:40.711665    2240 client.go:176] duration metric: took 12.1183047s to LocalClient.Create
	I1210 07:31:40.711665    2240 start.go:167] duration metric: took 12.1183047s to libmachine.API.Create "custom-flannel-648600"
	I1210 07:31:40.711665    2240 start.go:293] postStartSetup for "custom-flannel-648600" (driver="docker")
	I1210 07:31:40.711665    2240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:31:40.715645    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:31:40.718723    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:40.776513    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:40.917451    2240 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:31:40.923444    2240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:31:40.923444    2240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:31:40.923444    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1210 07:31:40.924452    2240 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem -> 113042.pem in /etc/ssl/certs
	I1210 07:31:40.929458    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 07:31:40.942452    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /etc/ssl/certs/113042.pem (1708 bytes)
	I1210 07:31:40.977491    2240 start.go:296] duration metric: took 265.8211ms for postStartSetup
	I1210 07:31:40.981481    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.034489    2240 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\config.json ...
	I1210 07:31:41.039496    2240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:31:41.043532    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.111672    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.255080    2240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:31:41.269938    2240 start.go:128] duration metric: took 12.6809984s to createHost
	I1210 07:31:41.269938    2240 start.go:83] releasing machines lock for "custom-flannel-648600", held for 12.6812262s
	I1210 07:31:41.273664    2240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-648600
	I1210 07:31:41.324666    2240 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1210 07:31:41.329678    2240 ssh_runner.go:195] Run: cat /version.json
	I1210 07:31:41.329678    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.334670    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:31:41.381680    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	W1210 07:31:41.497715    2240 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
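The status-127 failure above is a host binary name leaking into the guest: the connectivity probe ran "curl.exe" (the Windows name) over SSH inside the Linux container, where the binary is plain "curl". The registry warning emitted a few lines below is therefore likely a probe artifact rather than evidence of a real network problem; the check can be repeated by hand with the Linux name (profile name taken from this log):

    # re-run the registry probe from inside the guest with the Linux binary name
    minikube -p custom-flannel-648600 ssh -- curl -sS -m 2 https://registry.k8s.io/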
	I1210 07:31:41.501431    2240 ssh_runner.go:195] Run: systemctl --version
	I1210 07:31:41.518880    2240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:31:41.528176    2240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:31:41.531184    2240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:31:41.579185    2240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 07:31:41.579185    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:41.579185    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:41.579185    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1210 07:31:41.596178    2240 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1210 07:31:41.596178    2240 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1210 07:31:41.606178    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1210 07:31:41.626187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1210 07:31:41.641198    2240 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1210 07:31:41.645182    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1210 07:31:41.668187    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.687179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1210 07:31:41.706179    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1210 07:31:41.724180    2240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:31:41.742180    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1210 07:31:41.759185    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1210 07:31:41.778184    2240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1210 07:31:41.795180    2240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:31:41.811185    2240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:31:41.828187    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:41.983806    2240 ssh_runner.go:195] Run: sudo systemctl restart containerd
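The sed sequence above converges /etc/containerd/config.toml on the detected "cgroupfs" driver: it pins the sandbox image, disables restrict_oom_score_adj, forces SystemdCgroup = false, migrates runtime v1 and runc.v1 references to runc.v2, and re-enables unprivileged ports. Assuming the stock config layout shipped in the kicbase image, the stanza those edits aim at looks roughly like this:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false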
	I1210 07:31:42.163822    2240 start.go:496] detecting cgroup driver to use...
	I1210 07:31:42.163822    2240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:31:42.167818    2240 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1210 07:31:42.193819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.216825    2240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:31:42.280833    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:31:42.301820    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1210 07:31:42.320823    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:31:42.345832    2240 ssh_runner.go:195] Run: which cri-dockerd
	I1210 07:31:42.358831    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1210 07:31:42.373835    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1210 07:31:42.401822    2240 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1210 07:31:39.808959    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:39.828946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:39.859949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.859949    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:39.862944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:39.896961    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.896961    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:39.901952    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:39.936950    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.936950    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:39.939955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:39.969949    1436 logs.go:282] 0 containers: []
	W1210 07:31:39.969949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:39.972954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:40.002949    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.002949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:40.006946    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:40.036957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.036957    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:40.039947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:40.098959    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.098959    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:40.102955    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:40.149957    1436 logs.go:282] 0 containers: []
	W1210 07:31:40.149957    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:40.149957    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:40.149957    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:40.191850    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:40.192845    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:40.293665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:40.277190    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283148    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.283982    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.286428    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:40.287149    6432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
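Each "describe nodes" failure in this stretch is the same symptom: kubectl dials localhost:8443 and is refused, which matches the container listings above showing no kube-apiserver running yet. The gather loop simply polls until one exists; the two commands from the log that bracket the condition:

    # is there an apiserver container at all?
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    # once it is up, this same invocation should succeed
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig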
	I1210 07:31:40.293665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:40.293665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:40.325883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:40.325883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:40.379885    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:40.379885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:42.947835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:42.966833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:43.000857    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.000857    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:43.003835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:43.034830    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.034830    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:43.037843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:43.069836    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.069836    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:43.073842    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:43.105424    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.105465    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:43.109492    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:43.143411    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.143411    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:43.147409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:43.179168    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.179168    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:43.183167    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:43.211281    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.211281    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:43.214141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:43.248141    1436 logs.go:282] 0 containers: []
	W1210 07:31:43.248141    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:43.248141    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:43.248141    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:43.314876    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:43.314876    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:43.357233    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:43.357233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:31:42.551686    2240 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1210 07:31:42.712827    2240 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1210 07:31:42.712827    2240 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1210 07:31:42.735824    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1210 07:31:42.756828    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:42.906845    2240 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1210 07:31:43.937123    2240 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0302614s)
	I1210 07:31:43.944887    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:31:43.971819    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1210 07:31:43.996364    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.030377    2240 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1210 07:31:44.173489    2240 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1210 07:31:44.332105    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.483148    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1210 07:31:44.509404    2240 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1210 07:31:44.533765    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:31:44.690011    2240 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1210 07:31:44.790147    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1210 07:31:44.810716    2240 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1210 07:31:44.813714    2240 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1210 07:31:44.820719    2240 start.go:564] Will wait 60s for crictl version
	I1210 07:31:44.824717    2240 ssh_runner.go:195] Run: which crictl
	I1210 07:31:44.835701    2240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:31:44.880457    2240 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1210 07:31:44.883920    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:44.928460    2240 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1210 07:31:45.060104    2240 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 29.1.2 ...
	I1210 07:31:45.062900    2240 cli_runner.go:164] Run: docker exec -t custom-flannel-648600 dig +short host.docker.internal
	I1210 07:31:45.193754    2240 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1210 07:31:45.197851    2240 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1210 07:31:45.204880    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
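The one-liner above is an idempotent upsert into /etc/hosts: drop any existing host.minikube.internal line, append the fresh mapping, write to a temp file, then copy it back with sudo (a plain redirection would run in the unprivileged shell and fail on the protected file). Expanded for readability, with the IP from the dig result above:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.65.254	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts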
	I1210 07:31:45.225085    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
I1210 07:31:45.282870    2240 kubeadm.go:884] updating cluster {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:31:45.283875    2240 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 07:31:45.286873    2240 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1210 07:31:45.317078    2240 docker.go:691] Got preloaded images: 
	I1210 07:31:45.317078    2240 docker.go:697] registry.k8s.io/kube-apiserver:v1.34.3 wasn't preloaded
	I1210 07:31:45.317078    2240 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.3 registry.k8s.io/kube-controller-manager:v1.34.3 registry.k8s.io/kube-scheduler:v1.34.3 registry.k8s.io/kube-proxy:v1.34.3 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 07:31:45.330428    2240 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.336331    2240 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.341435    2240 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:45.341435    2240 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.347452    2240 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.347452    2240 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.352434    2240 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.355426    2240 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.358455    2240 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.361429    2240 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.365434    2240 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.366439    2240 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.369440    2240 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:45.370428    2240 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.374431    2240 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 07:31:45.379430    2240 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	W1210 07:31:45.411422    2240 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.466193    2240 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.518621    2240 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.573883    2240 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.622874    2240 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.672905    2240 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.723034    2240 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 07:31:45.771034    2240 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
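All eight authn warnings above have the same root cause: the configured Docker credential helper fails with "A specified logon session does not exist", a Windows credential-store error typical of non-interactive CI sessions, and each lookup then falls back to anonymous access, which is sufficient for these public registries. A hypothetical direct reproduction, assuming the agent's ~/.docker/config.json names Docker Desktop's helper as its credsStore:

    # credential helpers implement get/store/erase; "get" reads a registry URL on stdin
    echo "https://registry.k8s.io" | docker-credential-desktop get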
	I1210 07:31:45.842424    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.842823    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.869734    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.3" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.3" does not exist at hash "aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:45.884370    2240 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.3" needs transfer: "registry.k8s.io/kube-proxy:v1.34.3" does not exist at hash "36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691" in container runtime
	I1210 07:31:45.884370    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.884370    2240 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890739    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.34.3
	I1210 07:31:45.890951    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.34.3
	I1210 07:31:45.897121    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:45.901151    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:45.922366    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1210 07:31:45.956325    2240 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.3" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.3" does not exist at hash "5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942" in container runtime
	I1210 07:31:45.956325    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:45.956325    2240 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.961320    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 07:31:45.992754    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 07:31:46.045432    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:46.053082    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.3" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.3" does not exist at hash "aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78" in container runtime
	I1210 07:31:46.059786    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.059786    2240 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1210 07:31:46.060783    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.060783    2240 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.065694    2240 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1210 07:31:46.065694    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.065694    2240 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1210 07:31:46.067530    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.34.3
	I1210 07:31:46.067911    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1210 07:31:46.068609    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 07:31:46.070610    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1210 07:31:46.073597    2240 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.074603    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1210 07:31:46.146816    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.3': No such file or directory
	I1210 07:31:46.146816    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.147805    2240 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 --> /var/lib/minikube/images/kube-apiserver_v1.34.3 (27075584 bytes)
	I1210 07:31:46.147805    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 --> /var/lib/minikube/images/kube-proxy_v1.34.3 (25966592 bytes)
	I1210 07:31:46.151807    2240 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 07:31:46.255114    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 07:31:46.261151    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:46.262119    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:46.272115    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 07:31:46.272115    2240 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 07:31:46.272115    2240 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.272115    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.3': No such file or directory
	I1210 07:31:46.272115    2240 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.272115    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 --> /var/lib/minikube/images/kube-controller-manager_v1.34.3 (22830080 bytes)
	I1210 07:31:46.277116    2240 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:31:46.278121    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.289109    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 07:31:46.293116    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.3: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.3': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1210 07:31:46.475787    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 --> /var/lib/minikube/images/kube-scheduler_v1.34.3 (17393664 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1210 07:31:46.476808    2240 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1210 07:31:46.476808    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1210 07:31:46.476808    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1210 07:31:46.481795    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:46.504793    2240 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1210 07:31:46.504793    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1210 07:31:46.672791    2240 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1210 07:31:46.672791    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1210 07:31:47.172597    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1210 07:31:47.208589    2240 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 07:31:47.208589    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
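Every image in this block moves through the same three steps visible above: a stat existence check under /var/lib/minikube/images fails, the tarball is scp'd from the host-side cache, and the staged file is streamed into the daemon. A condensed sketch for one image (paths from this log; the transfer step is elided):

    # existence check, exactly as the log runs it
    stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
    # ...on failure, minikube scp's the cached tarball into place (elided)...
    # then stream the staged tarball into the docker daemon
    sudo /bin/bash -c "cat /var/lib/minikube/images/pause_3.10.1 | docker load"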
	W1210 07:31:43.531620    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	W1210 07:31:43.451546    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:43.441909    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443033    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.443856    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.446062    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:43.447664    6595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:43.452560    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:43.452560    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:43.479539    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:43.479539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:46.056731    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:46.081601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:46.111531    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.111531    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:46.116512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:46.149808    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.149808    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:46.155807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:46.190791    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.190791    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:46.193789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:46.232109    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.232109    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:46.235109    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:46.269122    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.269122    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:46.273122    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:46.302130    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.302130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:46.306119    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:46.338110    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.338110    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:46.341114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:46.370305    1436 logs.go:282] 0 containers: []
	W1210 07:31:46.370305    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:46.370305    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:46.370305    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:46.438787    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:46.438787    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:46.605791    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:46.605791    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:46.756762    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:31:46.747167    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.748310    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.749473    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.750856    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:46.751642    6761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:31:46.756762    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:46.756762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:46.793764    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:46.793764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:48.287161    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.0785558s)
	I1210 07:31:48.287161    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1210 07:31:48.287161    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.3
	I1210 07:31:48.287161    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load"
	I1210 07:31:51.130300    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.34.3 | docker load": (2.8430943s)
	I1210 07:31:51.130300    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3 from cache
	I1210 07:31:51.130300    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.3
	I1210 07:31:51.130300    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load"
	I1210 07:31:52.383759    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.34.3 | docker load": (1.2534401s)
	I1210 07:31:52.383759    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3 from cache
	I1210 07:31:52.383759    2240 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1210 07:31:52.383759    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1210 07:31:49.381174    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:49.403703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:49.436264    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.436317    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:49.440617    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:49.468917    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.468982    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:49.472677    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:49.499977    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.499977    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:49.504116    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:49.536309    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.536350    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:49.540463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:49.568274    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.568274    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:49.572177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:49.600130    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.600130    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:49.604000    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:49.632645    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.632645    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:49.636092    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:49.667017    1436 logs.go:282] 0 containers: []
	W1210 07:31:49.667017    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:49.667017    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:49.667017    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:49.705515    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:49.705515    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:49.790780    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:49.782366    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.783658    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.784961    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.786128    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:49.787161    6921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:49.790780    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:49.790780    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:49.817781    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:49.817781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:49.871600    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:49.871674    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.448511    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:52.475325    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:52.506360    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.506360    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:52.510172    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:52.540147    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.540147    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:52.544437    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:52.575774    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.575774    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:52.579336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:52.610061    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.610061    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:52.613342    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:52.642765    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.642765    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:52.649215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:52.678701    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.678701    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:52.682526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:52.710203    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.710203    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:52.715870    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:52.745326    1436 logs.go:282] 0 containers: []
	W1210 07:31:52.745351    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:52.745351    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:52.745397    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:52.811401    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:52.811401    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:52.853138    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:52.853138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:52.968335    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:52.959589    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.960835    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.961833    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.962872    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:52.963541    7099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:52.968335    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:52.968335    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:52.995279    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:52.995802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:31:55.245680    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (2.8618761s)
	I1210 07:31:55.245680    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1210 07:31:55.246466    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.3
	I1210 07:31:55.246522    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load"
	I1210 07:31:56.790187    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.34.3 | docker load": (1.5436405s)
	I1210 07:31:56.790187    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3 from cache
	I1210 07:31:56.790187    2240 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1210 07:31:56.790187    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load"
	W1210 07:31:53.564945    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:31:55.548093    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:55.571449    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:55.603901    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.603970    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:55.607695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:55.639065    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.639065    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:55.643536    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:55.671930    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.671930    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:55.675998    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:55.704460    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.704460    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:55.708947    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:55.739257    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.739257    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:55.742852    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:55.772295    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.772344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:55.776423    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:55.803812    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.803812    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:55.809849    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:55.841586    1436 logs.go:282] 0 containers: []
	W1210 07:31:55.841647    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:55.841647    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:55.841647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:55.916368    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:55.916368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:55.958653    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:55.958653    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:56.055702    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:56.041972    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.043936    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.045926    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.047763    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:56.048444    7266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:56.055702    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:56.055702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:56.084883    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:56.084883    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.290113    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.12.1 | docker load": (4.4998566s)
	I1210 07:32:01.290113    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1 from cache
	I1210 07:32:01.290113    2240 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.3
	I1210 07:32:01.290113    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load"
	I1210 07:31:58.642350    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:31:58.668189    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:31:58.699633    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.699633    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:31:58.705036    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:31:58.738553    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.738553    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:31:58.742579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:31:58.772414    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.772414    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:31:58.775757    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:31:58.804872    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.804872    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:31:58.808509    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:31:58.835398    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.835398    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:31:58.843124    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:31:58.871465    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.871465    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:31:58.875535    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:31:58.905029    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.905108    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:31:58.910324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:31:58.953100    1436 logs.go:282] 0 containers: []
	W1210 07:31:58.953100    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:31:58.953100    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:31:58.953100    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:31:59.012946    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:31:59.012946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:31:59.052964    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:31:59.052964    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:31:59.146228    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:31:59.133962    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.135271    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.136467    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.137230    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:31:59.139872    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:31:59.146228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:31:59.146228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:31:59.173200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:31:59.173200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:01.725170    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:01.746739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:01.779670    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.779670    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:01.783967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:01.812617    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.812617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:01.817482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:01.848083    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.848083    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:01.852344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:01.883648    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.883648    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:01.887655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:01.918403    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.918403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:01.922409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:01.961721    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.961721    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:01.969744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:01.998302    1436 logs.go:282] 0 containers: []
	W1210 07:32:01.998302    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:02.003804    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:02.032315    1436 logs.go:282] 0 containers: []
	W1210 07:32:02.032315    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:02.032315    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:02.032315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:02.096900    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:02.096900    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:02.136137    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:02.136137    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:02.227732    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:02.216961    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.218197    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.219273    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.220321    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:02.221438    7595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:02.227732    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:02.227732    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:02.255236    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:02.255236    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:03.670542    2240 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.34.3 | docker load": (2.3803916s)
	I1210 07:32:03.670542    2240 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3 from cache
	I1210 07:32:03.670542    2240 cache_images.go:125] Successfully loaded all cached images
	I1210 07:32:03.670542    2240 cache_images.go:94] duration metric: took 18.3531776s to LoadCachedImages
	I1210 07:32:03.670542    2240 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 docker true true} ...
	I1210 07:32:03.670542    2240 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-648600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml}
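
The kubelet unit fragment above relies on systemd drop-in semantics: the bare "ExecStart=" line clears any ExecStart inherited from the base unit, and the following ExecStart= line supplies the replacement; without the reset, systemd would reject a second ExecStart for a non-oneshot service. To inspect the merged unit on the node (an inspection step, not part of the test flow):

    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # needed after any unit or drop-in change
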
	I1210 07:32:03.674057    2240 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1210 07:32:03.753844    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:03.753844    2240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:32:03.753844    2240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-648600 NodeName:custom-flannel-648600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
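
Note the CgroupDriver:cgroupfs field above: minikube queried Docker for its cgroup driver (the "docker info --format {{.CgroupDriver}}" run just above) and carries the answer into the KubeletConfiguration below, since a runtime/kubelet cgroup-driver mismatch is a classic kubelet startup failure. Checking both sides by hand, as a sketch:

    docker info --format '{{.CgroupDriver}}'
    grep cgroupDriver /var/lib/kubelet/config.yaml   # should show the same driver
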
	I1210 07:32:03.753844    2240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-648600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
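
This rendered config bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new below and copied into place before kubeadm runs. If a config like this needs checking by hand, recent kubeadm releases (v1.26+) can validate it; a sketch using the node's own binary:

    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
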
	
	I1210 07:32:03.758233    2240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.772950    2240 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.3': No such file or directory
	
	Initiating transfer...
	I1210 07:32:03.777455    2240 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.3
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256
	I1210 07:32:03.790145    2240 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
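
The "checksum=file:" suffix on these URLs tells the downloader to fetch the published .sha256 file alongside the binary and verify the digest before installing. The same verification by hand with curl and sha256sum, using one of the URLs from the log (the .sha256 file contains just the hex digest):

    url=https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl
    curl -fsSLo kubectl "$url"
    echo "$(curl -fsSL "$url.sha256")  kubectl" | sha256sum -c -
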
	I1210 07:32:03.796039    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:03.796814    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm
	I1210 07:32:03.796843    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl
	I1210 07:32:03.817843    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubeadm': No such file or directory
	I1210 07:32:03.818011    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm --> /var/lib/minikube/binaries/v1.34.3/kubeadm (74027192 bytes)
	I1210 07:32:03.818298    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubectl': No such file or directory
	I1210 07:32:03.818803    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl --> /var/lib/minikube/binaries/v1.34.3/kubectl (60563640 bytes)
	I1210 07:32:03.822978    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet
	I1210 07:32:03.833074    2240 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.3/kubelet': No such file or directory
	I1210 07:32:03.833638    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet --> /var/lib/minikube/binaries/v1.34.3/kubelet (59203876 bytes)
	I1210 07:32:05.838364    2240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:32:05.850364    2240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1210 07:32:05.870151    2240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:32:05.891336    2240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1210 07:32:05.915010    2240 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:32:05.922767    2240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
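
The one-liner above keeps /etc/hosts idempotent: it filters out any existing line ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the temp file back with sudo (a plain redirect would run as the unprivileged SSH user and fail on /etc/hosts). Unrolled for readability, same behavior assumed:

    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
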
	I1210 07:32:05.942185    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:06.099167    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:06.121581    2240 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600 for IP: 192.168.85.2
	I1210 07:32:06.121613    2240 certs.go:195] generating shared ca certs ...
	I1210 07:32:06.121640    2240 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.121920    2240 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1210 07:32:06.122447    2240 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1210 07:32:06.122578    2240 certs.go:257] generating profile certs ...
	I1210 07:32:06.122578    2240 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key
	I1210 07:32:06.122578    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt with IP's: []
	I1210 07:32:06.321440    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt ...
	I1210 07:32:06.321440    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.crt: {Name:mk30a4977cc0d8ffd50678b3c23caa1e53531dd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.322223    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key ...
	I1210 07:32:06.322223    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\client.key: {Name:mke10982a653bbe15c8edebf2f43dc216f9268be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.323200    2240 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba
	I1210 07:32:06.323200    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1210 07:32:06.341062    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba ...
	I1210 07:32:06.341062    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba: {Name:mk0e9e825524eecc7aedfd18bb3bfe0b08c0466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342014    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba ...
	I1210 07:32:06.342014    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba: {Name:mk42b80e536f4c7e07cd83fa60afbb5af1e6e8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.342947    2240 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt
	I1210 07:32:06.354920    2240 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key.016af0ba -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key
	I1210 07:32:06.355812    2240 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key
	I1210 07:32:06.355812    2240 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt with IP's: []
	I1210 07:32:06.438517    2240 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt ...
	I1210 07:32:06.438517    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt: {Name:mk49d63357d91f886b5db1adca8a8959ac8a2637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:06.439596    2240 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key ...
	I1210 07:32:06.439596    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key: {Name:mkd00fe816a16ba7636ee1faff5584095510b505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
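
The profile certs generated here follow the standard CA-signed flow: one key pair per identity (the minikube-user client cert, the apiserver serving cert with its SAN IP list, the aggregator proxy-client cert), each signed by the shared minikubeCA created earlier. An equivalent client-cert issuance with openssl, as a sketch (ca.crt and ca.key stand in for the CA files under .minikube):

    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365
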
	I1210 07:32:06.454147    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem (1338 bytes)
	W1210 07:32:06.454968    2240 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304_empty.pem, impossibly tiny 0 bytes
	I1210 07:32:06.454968    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I1210 07:32:06.455228    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1210 07:32:06.455417    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1210 07:32:06.455581    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1210 07:32:06.455768    2240 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem (1708 bytes)
	I1210 07:32:06.456703    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:32:06.490234    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1210 07:32:06.516382    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:32:06.546895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:32:06.579157    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 07:32:06.611194    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:32:06.642582    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:32:06.673947    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-flannel-648600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:32:06.702762    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\11304.pem --> /usr/share/ca-certificates/11304.pem (1338 bytes)
	I1210 07:32:06.734932    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\113042.pem --> /usr/share/ca-certificates/113042.pem (1708 bytes)
	I1210 07:32:06.763895    2240 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:32:06.794884    2240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:32:06.824804    2240 ssh_runner.go:195] Run: openssl version
	I1210 07:32:06.839620    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.863187    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11304.pem /etc/ssl/certs/11304.pem
	I1210 07:32:06.881235    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.889982    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 05:48 /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.896266    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11304.pem
	I1210 07:32:06.945361    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:32:06.965592    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11304.pem /etc/ssl/certs/51391683.0
	I1210 07:32:06.982615    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.000345    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/113042.pem /etc/ssl/certs/113042.pem
	I1210 07:32:07.019650    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.028440    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 05:48 /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.032681    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113042.pem
	I1210 07:32:07.080664    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.098781    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/113042.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:32:07.119820    2240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.138968    2240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:32:07.157588    2240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.166110    2240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 05:31 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.169123    2240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:32:07.218939    2240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:32:07.238245    2240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
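
The test -L / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: libraries resolve a trusted CA by the hash of its subject name, expecting a symlink named <hash>.0 inside /etc/ssl/certs. Rebuilding the link for the minikube CA (the hash matches the b5213941.0 seen in the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
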
	I1210 07:32:07.255844    2240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:32:07.263714    2240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:32:07.263714    2240 kubeadm.go:401] StartCluster: {Name:custom-flannel-648600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:custom-flannel-648600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\kube-flannel.yaml} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:32:07.267520    2240 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1210 07:32:07.300048    2240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:32:07.317060    2240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:32:07.333647    2240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:32:07.337744    2240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:32:07.353638    2240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:32:07.353638    2240 kubeadm.go:158] found existing configuration files:
	
	I1210 07:32:07.357869    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:32:07.371538    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:32:07.375620    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:32:07.392582    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:32:07.408459    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:32:07.412872    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:32:07.431340    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.446697    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:32:07.451332    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:32:07.472431    2240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	W1210 07:32:03.602967    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:04.810034    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:04.838035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:04.888039    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.888039    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:04.892025    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:04.955032    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.955032    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:04.959038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:04.995031    1436 logs.go:282] 0 containers: []
	W1210 07:32:04.995031    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:04.999034    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:05.035036    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.035036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:05.040047    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:05.079034    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.079034    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:05.084038    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:05.123032    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.123032    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:05.128035    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:05.165033    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.165033    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:05.169028    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:05.205183    1436 logs.go:282] 0 containers: []
	W1210 07:32:05.205183    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:05.205183    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:05.205183    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:05.248358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:05.248358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:05.349366    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:05.339165    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.340548    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.341613    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.342697    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:05.343919    7758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
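Every "describe nodes" attempt in this stretch fails identically: kubectl dials localhost:8443 and gets connection refused, which simply means nothing is listening on the apiserver port yet, consistent with the empty k8s_kube-apiserver container list above. A minimal manual check along the same lines (a sketch, assuming it is run inside the node, e.g. via minikube ssh):

    curl -ks https://localhost:8443/livez || echo "apiserver not listening yet"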
	I1210 07:32:05.349366    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:05.349366    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:05.384377    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:05.384377    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:05.439383    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:05.439383    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
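The block above is one complete iteration of minikube's retry loop: probe for each control-plane container by its k8s_ name prefix, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. The probe step amounts to roughly the following (reconstructed from the commands logged above; the component list is copied verbatim from the filters):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # An empty result here corresponds to a "0 containers" line in the log
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done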
	I1210 07:32:08.021198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:08.045549    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:08.076568    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.076568    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:08.082429    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:08.113514    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.113514    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:08.117280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:08.145243    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.145243    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:08.151846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:08.182475    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.182475    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:08.186570    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:08.214500    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.214554    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:08.218698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:08.250229    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.250229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:08.254493    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:08.298394    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.298394    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:08.302457    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:08.331561    1436 logs.go:282] 0 containers: []
	W1210 07:32:08.331561    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:08.331561    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:08.331561    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:08.368913    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:08.368913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 07:32:07.487983    2240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:32:07.492242    2240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
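Taken together, the config-cleanup steps for process 2240 above follow one pattern: keep each kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, otherwise remove it so kubeadm can regenerate it. Roughly (a sketch reconstructed from the logged commands):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done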
	I1210 07:32:07.510557    2240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:32:07.626646    2240 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1210 07:32:07.630270    2240 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:32:07.725615    2240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
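The Swap and SystemVerification warnings above are only relevant on cgroup v1 nodes; a quick way to confirm which version the node is on (an assumption that this is run inside the node, e.g. via minikube ssh):

    stat -fc %T /sys/fs/cgroup/   # prints cgroup2fs on v2, tmpfs on v1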
	W1210 07:32:08.453343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:08.442562    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.443722    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.445918    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.447466    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:08.448822    7921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:08.453378    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:08.453417    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:08.488219    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:08.488219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:08.533777    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:08.533777    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.100898    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:11.123310    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:11.154369    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.154369    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:11.158211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:11.188349    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.188419    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:11.191999    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:11.218233    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.218263    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:11.222177    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:11.248157    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.248157    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:11.252075    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:11.280934    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.280934    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:11.284871    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:11.316173    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.316225    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:11.320150    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:11.350432    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.350494    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:11.354282    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:11.381767    1436 logs.go:282] 0 containers: []
	W1210 07:32:11.381819    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:11.381819    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:11.381874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:11.447079    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:11.447079    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:11.485987    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:11.485987    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:11.568313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:11.555927    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.557482    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.559214    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.561941    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:11.562681    8091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:11.568365    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:11.568408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:11.599474    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:11.599518    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:13.641314    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:14.165429    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:14.189363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:14.220411    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.220478    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:14.223878    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:14.253748    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.253798    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:14.257409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:14.288235    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.288235    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:14.291689    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:14.323349    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.323349    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:14.326680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:14.355227    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.355227    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:14.358704    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:14.389648    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.389648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:14.393032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:14.424212    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.424212    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:14.427425    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:14.457834    1436 logs.go:282] 0 containers: []
	W1210 07:32:14.457834    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:14.457834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:14.457834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:14.486053    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:14.486053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:14.538138    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:14.538138    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:14.601542    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:14.601542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:14.638885    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:14.638885    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:14.724482    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:14.715398    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.716451    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.717228    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.719965    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:14.720942    8282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:17.229775    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:17.254115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:17.287113    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.287113    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:17.292389    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:17.321661    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.321661    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:17.325615    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:17.360140    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.360140    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:17.366346    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:17.402963    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.402963    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:17.406830    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:17.436210    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.436210    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:17.440638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:17.468315    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.468315    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:17.473002    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:17.516057    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.516057    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:17.519835    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:17.546705    1436 logs.go:282] 0 containers: []
	W1210 07:32:17.546705    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:17.546705    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:17.546705    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:17.575272    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:17.575272    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:17.635882    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:17.635882    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:17.702984    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:17.702984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:17.738444    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:17.738444    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:17.826329    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:17.816355    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.817532    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.818909    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.821510    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:17.822595    8449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.331491    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:20.356562    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:20.393733    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.393733    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:20.397542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:20.424969    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.424969    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:20.430097    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:20.461163    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.461163    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:20.464553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:20.496041    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.496041    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:20.500386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:20.528481    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.528481    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:20.533192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:20.563678    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.563678    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:20.567914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:20.595909    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.595909    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:20.601427    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:20.633125    1436 logs.go:282] 0 containers: []
	W1210 07:32:20.633125    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:20.633125    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:20.633125    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:20.698742    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:20.698742    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:20.738675    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:20.738675    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:20.832925    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:20.823171    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.824395    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.825664    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.826500    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:20.828755    8594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:20.833019    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:20.833050    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:20.863741    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:20.863802    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:23.679657    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:23.424742    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:23.449719    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:23.484921    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.484982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:23.488818    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:23.520632    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.520718    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:23.525648    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:23.557856    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.557856    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:23.561789    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:23.593782    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.593782    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:23.596770    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:23.629689    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.629689    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:23.633972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:23.677648    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.677648    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:23.681665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:23.708735    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.708735    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:23.712484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:23.742324    1436 logs.go:282] 0 containers: []
	W1210 07:32:23.742324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:23.742324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:23.742324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:23.809315    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:23.809315    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:23.849820    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:23.849820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:23.932812    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:23.923514    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.925667    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.926732    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.928140    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:23.929123    8758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:23.932860    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:23.932896    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:23.962977    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:23.962977    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:26.517198    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:26.545066    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:26.577323    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.577323    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:26.581824    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:26.621178    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.621178    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:26.624162    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:26.657711    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.657711    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:26.661872    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:26.690869    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.690869    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:26.693873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:26.720949    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.720949    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:26.724289    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:26.757254    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.757254    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:26.761433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:26.788617    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.788617    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:26.792015    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:26.820229    1436 logs.go:282] 0 containers: []
	W1210 07:32:26.820229    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:26.820229    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:26.820229    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:26.886805    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:26.886805    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:26.926531    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:26.926531    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:27.014343    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:27.001829    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.003812    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.004706    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.007231    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:27.008445    8919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:27.014420    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:27.014490    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:27.043375    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:27.043375    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:29.223517    2240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:32:29.223517    2240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:32:29.224269    2240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:32:29.224467    2240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:32:29.229027    2240 out.go:252]   - Generating certificates and keys ...
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:32:29.229027    2240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:32:29.229660    2240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:32:29.229827    2240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.229911    2240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:32:29.230468    2240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-648600 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1210 07:32:29.230658    2240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:32:29.230768    2240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:32:29.230900    2240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:32:29.230947    2240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:32:29.231503    2240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:32:29.231582    2240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:32:29.231582    2240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:32:29.234181    2240 out.go:252]   - Booting up control plane ...
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:32:29.234181    2240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:32:29.234702    2240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:32:29.234874    2240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:32:29.234874    2240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002366911s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:32:29.235782    2240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:32:29.236404    2240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 12.235267696s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 12.434241439s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 14.5023353s
	I1210 07:32:29.236992    2240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:32:29.236992    2240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:32:29.237590    2240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:32:29.237590    2240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-648600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:32:29.237590    2240 kubeadm.go:319] [bootstrap-token] Using token: a4ld74.20ve6i3rm5ksexxo
	I1210 07:32:29.239648    2240 out.go:252]   - Configuring RBAC rules ...
	I1210 07:32:29.239648    2240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:32:29.240674    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:32:29.240944    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:32:29.241383    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:32:29.241649    2240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:32:29.241668    2240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:32:29.241668    2240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:32:29.241668    2240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:32:29.242197    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242265    2240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:32:29.242265    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.242850    2240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:32:29.242850    2240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:32:29.242850    2240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:32:29.242850    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:32:29.243436    2240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.243436    2240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b \
	I1210 07:32:29.243436    2240 kubeadm.go:319] 	--control-plane 
	I1210 07:32:29.243436    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:32:29.244018    2240 kubeadm.go:319] 
	I1210 07:32:29.244018    2240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a4ld74.20ve6i3rm5ksexxo \
	I1210 07:32:29.244018    2240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2501b4ef72a9792bee16677b10e6bb41bd6980e44395cdcd843b1bcd1dba3c3b 
	I1210 07:32:29.244018    2240 cni.go:84] Creating CNI manager for "testdata\\kube-flannel.yaml"
	I1210 07:32:29.246745    2240 out.go:179] * Configuring testdata\kube-flannel.yaml (Container Networking Interface) ...
	I1210 07:32:29.266121    2240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1210 07:32:29.270492    2240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1210 07:32:29.280075    2240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1210 07:32:29.280075    2240 ssh_runner.go:362] scp testdata\kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4578 bytes)
	I1210 07:32:29.314572    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 07:32:29.754597    2240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-648600 minikube.k8s.io/updated_at=2025_12_10T07_32_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=38b80b624854eb385412e6286b2192fdf6d0d602 minikube.k8s.io/name=custom-flannel-648600 minikube.k8s.io/primary=true
	I1210 07:32:29.759607    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.770603    2240 ops.go:34] apiserver oom_adj: -16
	I1210 07:32:29.895974    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.395328    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:30.896828    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.396414    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:31.896200    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:32.396778    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:29.599594    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:29.627372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:29.659982    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.659982    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:29.662983    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:29.694702    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.694702    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:29.700318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:29.732602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.732602    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:29.735594    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:29.769602    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.769602    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:29.773601    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:29.805199    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.805199    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:29.808179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:29.838578    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.838578    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:29.843641    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:29.878051    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.878051    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:29.881052    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:29.921782    1436 logs.go:282] 0 containers: []
	W1210 07:32:29.921782    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:29.921782    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:29.921782    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:29.991328    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:29.991328    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:30.030358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:30.031358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:30.117974    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:30.107541    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.108605    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.109589    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.110674    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:30.111579    9085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:30.118027    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:30.118027    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:30.147934    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:30.147934    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:32.704372    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:32.727813    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:32.762114    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.762228    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:32.767248    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:32.801905    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.801968    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:32.805939    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:32.836433    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.836579    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:32.840369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:32.870265    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.870265    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:32.874049    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:32.904540    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.904540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:32.908658    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:32.937325    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.937407    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:32.941191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:32.974829    1436 logs.go:282] 0 containers: []
	W1210 07:32:32.974893    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:32.980307    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:33.012207    1436 logs.go:282] 0 containers: []
	W1210 07:32:33.012268    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:33.012288    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:33.012288    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:33.062151    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:33.062151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:33.126084    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:33.126084    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:33.164564    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:33.164564    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:33.252175    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:33.238911    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.239860    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.243396    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.244685    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:33.245447    9265 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:33.252175    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:33.252175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:32.894984    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.397040    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:33.895777    2240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:32:34.084987    2240 kubeadm.go:1114] duration metric: took 4.3302518s to wait for elevateKubeSystemPrivileges
	I1210 07:32:34.085013    2240 kubeadm.go:403] duration metric: took 26.8208803s to StartCluster
	I1210 07:32:34.085095    2240 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.085299    2240 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 07:32:34.087295    2240 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:32:34.088397    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:32:34.088397    2240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1210 07:32:34.088932    2240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:32:34.089115    2240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-648600"
	I1210 07:32:34.089115    2240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-648600"
	I1210 07:32:34.089272    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.089454    2240 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 07:32:34.091048    2240 out.go:179] * Verifying Kubernetes components...
	I1210 07:32:34.099313    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.100384    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.101389    2240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:32:34.165121    2240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-648600"
	I1210 07:32:34.165121    2240 host.go:66] Checking if "custom-flannel-648600" exists ...
	I1210 07:32:34.166107    2240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:32:34.174109    2240 cli_runner.go:164] Run: docker container inspect custom-flannel-648600 --format={{.State.Status}}
	I1210 07:32:34.177116    2240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:34.177116    2240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:32:34.181109    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.228110    2240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.228110    2240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:32:34.231111    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:34.232110    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.295102    2240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58200 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-flannel-648600\id_rsa Username:docker}
	I1210 07:32:34.361698    2240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:32:34.577307    2240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:32:34.743911    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:32:34.748484    2240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:32:35.145540    2240 start.go:977] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1210 07:32:35.149854    2240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-flannel-648600
	I1210 07:32:35.210514    2240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:35.684992    2240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-648600" context rescaled to 1 replicas
	I1210 07:32:35.860846    2240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1123448s)
	I1210 07:32:35.863841    2240 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1210 07:32:35.869842    2240 addons.go:530] duration metric: took 1.7814171s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1210 07:32:37.217134    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:33.712552    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:35.789401    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:35.810140    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:35.846049    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.846049    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:35.850173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:35.881840    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.881840    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:35.884841    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:35.913190    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.913190    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:35.916698    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:35.953160    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.953160    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:35.956661    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:35.990725    1436 logs.go:282] 0 containers: []
	W1210 07:32:35.990725    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:35.994362    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:36.027153    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.027153    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:36.031157    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:36.060142    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.060142    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:36.063139    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:36.096214    1436 logs.go:282] 0 containers: []
	W1210 07:32:36.096291    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:36.096291    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:36.096291    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:36.136455    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:36.136455    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:36.228827    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:36.215892    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.217040    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.218413    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.219992    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:36.221006    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:36.228910    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:36.228944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:36.260979    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:36.261040    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:36.321946    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:36.321946    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:32:39.747934    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	W1210 07:32:42.215582    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:38.893525    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:38.918010    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:38.951682    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.951682    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:38.954817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:38.986714    1436 logs.go:282] 0 containers: []
	W1210 07:32:38.986714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:38.992805    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:39.024242    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.024242    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:39.028333    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:39.057504    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.057504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:39.063178    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:39.093362    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.093362    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:39.097488    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:39.130652    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.130690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:39.133596    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:39.163556    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.163556    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:39.168915    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:39.202587    1436 logs.go:282] 0 containers: []
	W1210 07:32:39.202587    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:39.202587    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:39.202587    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:39.268647    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:39.268647    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:39.308297    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:39.308297    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:39.438181    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:39.395105    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.396191    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.398354    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.399763    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:39.401460    9580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:39.438181    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:39.438181    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:39.467128    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:39.467176    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:42.023591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:42.047765    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:42.080166    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.080166    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:42.084928    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:42.114905    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.114905    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:42.118820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:42.148212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.148212    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:42.151728    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:42.182256    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.182256    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:42.185843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:42.216232    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.216276    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:42.219555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:42.249214    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.249214    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:42.253469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:42.281977    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.281977    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:42.285971    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:42.313212    1436 logs.go:282] 0 containers: []
	W1210 07:32:42.314210    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:42.314210    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:42.314210    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:42.382226    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:42.382226    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:42.424358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:42.424358    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:42.509116    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:32:42.500360    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.501554    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.503040    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.504307    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:42.505418    9759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:32:42.509116    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:42.509116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:42.536096    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:42.536096    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:44.217341    2240 node_ready.go:57] node "custom-flannel-648600" has "Ready":"False" status (will retry)
	I1210 07:32:45.217929    2240 node_ready.go:49] node "custom-flannel-648600" is "Ready"
	I1210 07:32:45.217929    2240 node_ready.go:38] duration metric: took 10.0071872s for node "custom-flannel-648600" to be "Ready" ...
	I1210 07:32:45.217929    2240 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:32:45.221913    2240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.241224    2240 api_server.go:72] duration metric: took 11.1520714s to wait for apiserver process to appear ...
	I1210 07:32:45.241248    2240 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:32:45.241297    2240 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58199/healthz ...
	I1210 07:32:45.255531    2240 api_server.go:279] https://127.0.0.1:58199/healthz returned 200:
	ok
	I1210 07:32:45.259632    2240 api_server.go:141] control plane version: v1.34.3
	I1210 07:32:45.259696    2240 api_server.go:131] duration metric: took 18.4479ms to wait for apiserver health ...
	I1210 07:32:45.259716    2240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:32:45.268791    2240 system_pods.go:59] 7 kube-system pods found
	I1210 07:32:45.268849    2240 system_pods.go:61] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.268849    2240 system_pods.go:61] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.268894    2240 system_pods.go:61] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.268894    2240 system_pods.go:74] duration metric: took 9.14ms to wait for pod list to return data ...
	I1210 07:32:45.268935    2240 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:32:45.273316    2240 default_sa.go:45] found service account: "default"
	I1210 07:32:45.273353    2240 default_sa.go:55] duration metric: took 4.4181ms for default service account to be created ...
	I1210 07:32:45.273353    2240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:32:45.280767    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.280945    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.280945    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.280998    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.281064    2240 retry.go:31] will retry after 250.377545ms: missing components: kube-dns
	I1210 07:32:45.539061    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.539616    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.539616    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.539616    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.539718    2240 retry.go:31] will retry after 289.337772ms: missing components: kube-dns
	I1210 07:32:45.840329    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:45.840329    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:45.840329    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:45.840329    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:45.840528    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:45.840528    2240 retry.go:31] will retry after 309.196772ms: missing components: kube-dns
	I1210 07:32:46.157293    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.157293    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.157293    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.157293    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.157293    2240 retry.go:31] will retry after 407.04525ms: missing components: kube-dns
	I1210 07:32:46.592154    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:46.592265    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:46.592265    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:46.592297    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:46.592318    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:32:46.592318    2240 retry.go:31] will retry after 495.94184ms: missing components: kube-dns
	I1210 07:32:47.094557    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.094557    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.094557    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.094557    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.095074    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.095074    2240 retry.go:31] will retry after 778.892273ms: missing components: kube-dns
	W1210 07:32:43.745046    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:45.087059    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:45.110662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:45.142133    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.142133    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:45.146341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:45.178232    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.178232    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:45.182428    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:45.211507    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.211507    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:45.215400    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:45.245805    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.246346    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:45.251790    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:45.299793    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.299793    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:45.304394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:45.332689    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.332689    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:45.338438    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:45.371989    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.372039    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:45.376951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:45.411498    1436 logs.go:282] 0 containers: []
	W1210 07:32:45.411558    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:45.411558    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:45.411617    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:45.488591    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:45.489591    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:45.529135    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:45.529135    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:45.627238    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:45.616715    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.617907    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.618773    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.622470    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:45.623968    9922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:45.627238    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:45.627238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:45.659505    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:45.659505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.224164    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:48.247748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:48.276146    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.276253    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:48.279224    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:48.307561    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.307587    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:48.311247    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:48.342268    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.342268    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:48.346481    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:48.379504    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.379504    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:48.384265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:47.881744    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:47.881744    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:32:47.881744    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:47.881744    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:47.882297    2240 retry.go:31] will retry after 913.098856ms: missing components: kube-dns
	I1210 07:32:48.802046    2240 system_pods.go:86] 7 kube-system pods found
	I1210 07:32:48.802046    2240 system_pods.go:89] "coredns-66bc5c9577-dhgpj" [f80e9eec-915d-4a0b-a795-81f90a51d4be] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "etcd-custom-flannel-648600" [aa8349ca-6671-4947-91e7-244bb810adbb] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-apiserver-custom-flannel-648600" [0c4d3f3c-0914-43ac-8f95-229996b686bd] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-controller-manager-custom-flannel-648600" [7ba9a851-25b1-4b69-83e7-a0bc3a352054] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-proxy-vrrgr" [16a2ab63-e04d-4513-a211-405cba515a2d] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "kube-scheduler-custom-flannel-648600" [24843a00-f2ac-4838-8cfd-934f6ef711a8] Running
	I1210 07:32:48.802046    2240 system_pods.go:89] "storage-provisioner" [6c4788d8-979d-4ba1-a63e-82aa2665c6b2] Running
	I1210 07:32:48.802046    2240 system_pods.go:126] duration metric: took 3.5286376s to wait for k8s-apps to be running ...
	I1210 07:32:48.802046    2240 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:32:48.807470    2240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:32:48.825598    2240 system_svc.go:56] duration metric: took 23.5517ms WaitForService to wait for kubelet
	I1210 07:32:48.825598    2240 kubeadm.go:587] duration metric: took 14.7364354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:32:48.825689    2240 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:32:48.831503    2240 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1210 07:32:48.831503    2240 node_conditions.go:123] node cpu capacity is 16
	I1210 07:32:48.831503    2240 node_conditions.go:105] duration metric: took 5.8138ms to run NodePressure ...
	I1210 07:32:48.831503    2240 start.go:242] waiting for startup goroutines ...
	I1210 07:32:48.831503    2240 start.go:247] waiting for cluster config update ...
	I1210 07:32:48.831503    2240 start.go:256] writing updated cluster config ...
	I1210 07:32:48.837195    2240 ssh_runner.go:195] Run: rm -f paused
	I1210 07:32:48.844148    2240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:48.853005    2240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.864384    2240 pod_ready.go:94] pod "coredns-66bc5c9577-dhgpj" is "Ready"
	I1210 07:32:48.864472    2240 pod_ready.go:86] duration metric: took 11.4282ms for pod "coredns-66bc5c9577-dhgpj" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.867887    2240 pod_ready.go:83] waiting for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.876367    2240 pod_ready.go:94] pod "etcd-custom-flannel-648600" is "Ready"
	I1210 07:32:48.876367    2240 pod_ready.go:86] duration metric: took 8.4794ms for pod "etcd-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.880884    2240 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.888453    2240 pod_ready.go:94] pod "kube-apiserver-custom-flannel-648600" is "Ready"
	I1210 07:32:48.888453    2240 pod_ready.go:86] duration metric: took 7.5694ms for pod "kube-apiserver-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:48.891939    2240 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.254863    2240 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-648600" is "Ready"
	I1210 07:32:49.255015    2240 pod_ready.go:86] duration metric: took 363.0699ms for pod "kube-controller-manager-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.454047    2240 pod_ready.go:83] waiting for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:49.854254    2240 pod_ready.go:94] pod "kube-proxy-vrrgr" is "Ready"
	I1210 07:32:49.854329    2240 pod_ready.go:86] duration metric: took 400.2758ms for pod "kube-proxy-vrrgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.054101    2240 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:94] pod "kube-scheduler-custom-flannel-648600" is "Ready"
	I1210 07:32:50.453713    2240 pod_ready.go:86] duration metric: took 399.6056ms for pod "kube-scheduler-custom-flannel-648600" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:32:50.453713    2240 pod_ready.go:40] duration metric: took 1.6095401s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:32:50.552047    2240 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1210 07:32:50.555856    2240 out.go:179] * Done! kubectl is now configured to use "custom-flannel-648600" cluster and "default" namespace by default
	I1210 07:32:48.417490    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.417490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:48.420482    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:48.463340    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.463340    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:48.466961    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:48.498101    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.498101    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:48.501771    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:48.532099    1436 logs.go:282] 0 containers: []
	W1210 07:32:48.532099    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:48.532099    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:48.532099    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:48.612165    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:48.602526   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.604459   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.605839   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.608058   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:48.609316   10090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:48.612165    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:48.612165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:48.639467    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:48.639467    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:48.708307    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:48.708378    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:48.769132    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:48.769193    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.313991    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:51.338965    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:51.379596    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.379666    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:51.384637    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:51.439084    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.439084    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:51.443082    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:51.481339    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.481375    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:51.485798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:51.515086    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.515086    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:51.519086    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:51.549657    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.549745    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:51.553762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:51.594636    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.594636    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:51.601112    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:51.634850    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.634897    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:51.638417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:51.668658    1436 logs.go:282] 0 containers: []
	W1210 07:32:51.668658    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:51.668658    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:51.668658    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:51.743421    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:51.743421    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:51.785980    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:51.785980    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:51.881612    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:51.875319   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.876439   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.877390   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.878342   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:51.879220   10264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:51.881612    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:51.881612    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:51.915211    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:51.915211    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:32:53.781958    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:32:54.477323    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:54.503322    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:54.543324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.543324    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:54.547318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:54.584329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.584329    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:54.588316    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:54.620313    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.620313    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:54.623313    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:54.656331    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.656331    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:54.662335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:54.698319    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.698319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:54.702320    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:54.730323    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.730323    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:54.734335    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:54.767329    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.767329    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:54.772326    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:54.807324    1436 logs.go:282] 0 containers: []
	W1210 07:32:54.807324    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:54.807324    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:54.807324    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:54.885116    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:54.885116    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:32:54.922078    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:54.922078    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:55.025433    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:55.017851   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.018872   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.019791   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.020881   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:55.021812   10432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:55.025433    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:55.025433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:55.062949    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:55.062949    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:57.627400    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:32:57.652685    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:32:57.682605    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.682695    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:32:57.687397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:32:57.715588    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.715643    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:32:57.719155    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:32:57.746386    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.746433    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:32:57.751074    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:32:57.786162    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.786225    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:32:57.790161    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:32:57.821543    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.821543    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:32:57.825865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:32:57.854873    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.854873    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:32:57.858370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:32:57.908764    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.908764    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:32:57.912923    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:32:57.943110    1436 logs.go:282] 0 containers: []
	W1210 07:32:57.943156    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:32:57.943156    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:32:57.943220    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:32:58.044764    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:32:58.032727   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.034310   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.035457   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.038242   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:32:58.039578   10587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:32:58.044764    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:32:58.044764    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:32:58.074136    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:32:58.074136    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:32:58.130739    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:32:58.130739    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:32:58.198319    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:32:58.198319    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:00.746286    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:00.773024    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:00.801991    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.801991    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:00.806103    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:00.839474    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.839538    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:00.843748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:00.872704    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.872704    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:00.879471    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:00.910099    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.910099    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:00.913675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:00.942535    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.942587    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:00.946706    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:00.978075    1436 logs.go:282] 0 containers: []
	W1210 07:33:00.978075    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:00.981585    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:01.010831    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.010862    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:01.014542    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:01.046630    1436 logs.go:282] 0 containers: []
	W1210 07:33:01.046630    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:01.046630    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:01.046630    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:01.110794    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:01.110794    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:01.152129    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:01.152129    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:01.244044    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:01.232452   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.233728   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.234672   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.238201   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:01.239249   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:01.244044    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:01.244044    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:01.278465    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:01.278465    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:03.818627    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:03.833114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:03.855801    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:03.886510    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.886573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:03.890099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:03.920839    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.920839    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:03.927061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:03.956870    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.956870    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:03.960568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:03.992698    1436 logs.go:282] 0 containers: []
	W1210 07:33:03.992784    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:03.996483    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:04.027029    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.027149    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:04.030240    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:04.063615    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.063615    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:04.067578    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:04.097874    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.097921    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:04.102194    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:04.133751    1436 logs.go:282] 0 containers: []
	W1210 07:33:04.133751    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:04.133751    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:04.133751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:04.200457    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:04.200457    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:04.240408    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:04.240408    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:04.321404    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:04.310792   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.311874   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.312796   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.314967   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:04.316599   10933 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:04.321404    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:04.321404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:04.348691    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:04.348788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:06.910838    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:06.942433    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:06.977118    1436 logs.go:282] 0 containers: []
	W1210 07:33:06.977156    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:06.981007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:07.010984    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.010984    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:07.015418    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:07.044766    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.044766    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:07.048710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:07.081347    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.081347    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:07.085264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:07.120524    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.120524    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:07.125158    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:07.162231    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.162231    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:07.167511    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:07.199783    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.199783    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:07.203843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:07.237945    1436 logs.go:282] 0 containers: []
	W1210 07:33:07.237945    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:07.237945    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:07.237945    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:07.303014    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:07.303014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:07.339790    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:07.339790    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:07.433533    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:07.422610   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.423608   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.426487   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.427518   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:07.428665   11099 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:07.433578    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:07.433622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:07.463534    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:07.463534    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:10.019483    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:10.042553    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:10.075861    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.075861    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:10.079883    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:10.112806    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.112855    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:10.118076    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:10.149529    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.149529    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:10.154764    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:10.183943    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.183943    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:10.188277    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:10.225075    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.225109    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:10.229148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:10.258752    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.258831    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:10.262260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:10.290375    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.290375    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:10.294114    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:10.324184    1436 logs.go:282] 0 containers: []
	W1210 07:33:10.324184    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:10.324184    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:10.324257    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:10.389060    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:10.389060    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:10.428762    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:10.428762    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:10.512419    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:10.502106   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.503175   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.504155   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.507624   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:10.508833   11266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:10.512419    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:10.512419    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:10.539151    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:10.539151    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
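Each retry cycle starts by probing for the expected control-plane containers by name before gathering logs; the k8s_<component> prefix is the naming convention cri-dockerd uses for pod containers. A sketch of the same probe as a shell loop (the loop itself is an illustration; the individual `docker ps` invocations are taken verbatim from the cycles above):

    # List container IDs for each expected component; an empty result is
    # what produces the 'No container was found matching ...' warnings.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
        [ -z "$ids" ] && echo "no container matching ${c}"
    done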
	I1210 07:33:13.096376    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:13.120463    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:13.154821    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.154821    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:13.158241    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:13.186136    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.186172    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:13.190126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:13.217850    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.217850    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:13.220856    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:13.254422    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.254422    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:13.258405    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:13.290565    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.290650    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:13.294141    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:13.324205    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.324205    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:13.327944    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:13.359148    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.359148    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:13.363435    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:13.394783    1436 logs.go:282] 0 containers: []
	W1210 07:33:13.394783    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:13.394783    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:13.394783    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 07:33:13.858746    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:13.472122    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:13.472122    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:13.512554    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:13.512554    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:13.606866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:13.598440   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.599732   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.600869   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.602040   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:13.603324   11431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:13.606866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:13.606866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:13.640509    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:13.640509    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
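The container-status command above packs two fallbacks into one line: the backticks substitute the full crictl path when it is installed (otherwise the bare name, which will then fail to execute), and the `||` falls through to plain `docker ps -a` when the crictl invocation fails. The same command expanded with $(...) substitution for readability (behaviorally equivalent sketch, not a change to the harness):

    # Resolve crictl if present; if the crictl call fails for any reason,
    # fall back to listing containers through the docker CLI instead.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a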
	I1210 07:33:16.200969    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:16.227853    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:16.259466    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.259503    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:16.263863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:16.305661    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.305714    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:16.309344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:16.349702    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.349702    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:16.354239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:16.389642    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.389669    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:16.393404    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:16.422749    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.422749    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:16.428043    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:16.462871    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.462871    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:16.466863    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:16.500036    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.500036    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:16.505217    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:16.545533    1436 logs.go:282] 0 containers: []
	W1210 07:33:16.545563    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:16.545563    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:16.545640    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:16.616718    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:16.616718    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:16.662358    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:16.662414    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:16.771496    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:16.759784   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.760601   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.764427   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.766044   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:16.767471   11594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:16.771539    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:16.771539    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:16.802169    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:16.802169    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
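Alongside the container probes, every cycle pulls the same three log sources over SSH: the kubelet unit, the docker and cri-docker units, and recent kernel warnings. These are the commands verbatim from the cycles above, runnable as-is inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400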
	I1210 07:33:19.361839    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:19.384627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:19.418054    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.418054    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:19.423334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:19.449315    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.450326    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:19.453336    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:19.479318    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.479318    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:19.483409    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:19.515568    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.515568    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:19.518948    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:19.547403    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.547403    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:19.550914    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:19.582586    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.582643    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:19.586506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:19.617655    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.617655    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:19.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:19.653692    1436 logs.go:282] 0 containers: []
	W1210 07:33:19.653797    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:19.653820    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:19.653820    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:19.720756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:19.720756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:19.788168    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:19.788168    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:19.825175    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:19.825175    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:19.937176    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:19.910996   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912147   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.912626   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.933700   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:19.934740   11780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:19.938191    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:19.938191    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.472081    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:22.499318    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:22.535642    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.535642    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:22.540234    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:22.575580    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.575580    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:22.578579    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:22.611585    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.612584    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:22.615587    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:22.645600    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.645600    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:22.649593    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:22.680588    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.680588    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:22.684584    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:22.713587    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.713587    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:22.716592    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:22.745591    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.745591    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:22.748591    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:22.777133    1436 logs.go:282] 0 containers: []
	W1210 07:33:22.777133    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:22.777133    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:22.777133    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:22.866913    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:22.856823   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.858179   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.859339   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860428   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:22.860817   11918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:22.866913    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:22.866913    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:22.895817    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:22.895817    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:22.963449    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:22.964449    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:23.024022    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:23.024022    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:33:23.891822    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:25.581257    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:25.606450    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:25.638465    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.638465    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:25.641459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:25.675461    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.675461    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:25.678460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:25.712472    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.712472    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:25.715460    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:25.742469    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.742469    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:25.745459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:25.778468    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.778468    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:25.782466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:25.810470    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.810470    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:25.813459    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:25.842959    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.843962    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:25.846951    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:25.879265    1436 logs.go:282] 0 containers: []
	W1210 07:33:25.879265    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:25.879265    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:25.879265    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:25.923140    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:25.923140    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:26.006825    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:25.994746   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.996044   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.997646   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:25.998827   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:26.000023   12097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:26.006825    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:26.006825    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:26.036172    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:26.036172    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:26.088180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:26.088180    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:28.665087    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:28.689823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:28.725678    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.725714    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:28.728663    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:28.759105    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.759146    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:28.763209    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:28.794743    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.794743    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:28.798927    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:28.832979    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.832979    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:28.836972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:28.869676    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.869676    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:28.874394    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:28.909690    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.909690    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:28.914703    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:28.948685    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.948685    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:28.951687    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:28.983688    1436 logs.go:282] 0 containers: []
	W1210 07:33:28.983688    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:28.983688    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:28.983688    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:29.038702    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:29.038702    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:29.102687    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:29.102687    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:29.157695    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:29.157695    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:29.254070    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:29.238189   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.239080   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.244117   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.246991   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:29.248197   12280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:29.254070    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:29.254070    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
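The roughly 3-second rhythm of the `sudo pgrep -xnf kube-apiserver.*minikube.*` lines is minikube polling for the apiserver process to appear before re-checking component health. A standalone sketch of an equivalent wait loop (the 3s interval matches the log cadence; the 300s timeout is an assumption for illustration only):

    # Poll for a kube-apiserver process whose full command line mentions
    # minikube: -x exact match, -n newest, -f match the full command line.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "timed out waiting for kube-apiserver" >&2
            exit 1
        fi
        sleep 3
    done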
	I1210 07:33:31.790873    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:31.815324    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:31.848719    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.848719    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:31.853126    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:31.894569    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.894618    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:31.901660    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:31.945924    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.945924    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:31.949930    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:31.980922    1436 logs.go:282] 0 containers: []
	W1210 07:33:31.980922    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:31.983920    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:32.015920    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.015920    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:32.018924    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:32.055014    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.055014    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:32.059907    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:32.088299    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.088299    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:32.091301    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:32.122373    1436 logs.go:282] 0 containers: []
	W1210 07:33:32.122373    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:32.122373    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:32.122373    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:32.200241    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:32.200241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:32.235857    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:32.236857    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:32.346052    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:32.333533   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.334563   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.336148   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.337055   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:32.339822   12429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:32.346052    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:32.346052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:32.374360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:32.374360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:33.924414    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:34.931799    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:34.953865    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:34.983147    1436 logs.go:282] 0 containers: []
	W1210 07:33:34.983147    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:34.986833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:35.017888    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.017888    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:35.021662    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:35.051231    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.051231    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:35.055612    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:35.089316    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.089316    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:35.093193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:35.121682    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.121682    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:35.126091    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:35.158874    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.158874    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:35.165874    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:35.201117    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.201117    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:35.206353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:35.236228    1436 logs.go:282] 0 containers: []
	W1210 07:33:35.236228    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:35.236228    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:35.236228    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:35.267932    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:35.267994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:35.320951    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:35.320951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:35.383537    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:35.383589    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:35.425468    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:35.425468    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:35.528144    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:33:35.516901   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.517862   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.520128   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.521034   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:35.522130   12612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:33:38.032492    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:38.054909    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:38.083957    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.083957    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:38.087695    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:38.116008    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.116008    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:38.121353    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:38.151236    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.151236    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:38.157561    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:38.191692    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.191739    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:38.195638    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:38.232952    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.232952    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:38.240283    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:38.267392    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.267392    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:38.270392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:38.302982    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.302982    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:38.306527    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:38.337370    1436 logs.go:282] 0 containers: []
	W1210 07:33:38.337370    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
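	The scan just above is minikube enumerating the expected control-plane containers by the kubeadm "k8s_" name prefix; every filter comes back empty, so Kubernetes never started inside the node. A rough local equivalent of that loop (a sketch only — logs.go runs these commands over SSH inside the node; the component names are copied from the log):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	names := []string{
	    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	    	}
	    	for _, name := range names {
	    		// One `docker ps -a` per component, filtered by name, IDs only.
	    		out, err := exec.Command("docker", "ps", "-a",
	    			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	    		if err != nil {
	    			fmt.Printf("%s: scan failed: %v\n", name, err)
	    			continue
	    		}
	    		fmt.Printf("%s: %d containers\n", name, len(strings.Fields(string(out))))
	    	}
	    }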
	I1210 07:33:38.337663    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:38.337663    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:38.378149    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:38.378149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:38.496679    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:38.485115   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.486129   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.488286   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.489938   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:38.491114   12759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:38.496679    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:38.496679    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:38.523508    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:38.524031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:38.575827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:38.575926    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.142591    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:41.169193    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:41.202128    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.202197    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:41.205840    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:41.232108    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.232108    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:41.236042    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:41.266240    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.266240    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:41.270256    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:41.299391    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.299914    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:41.305198    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:41.334815    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.334888    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:41.338221    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:41.366830    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.366830    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:41.371846    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:41.403239    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.403307    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:41.406504    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:41.435444    1436 logs.go:282] 0 containers: []
	W1210 07:33:41.435507    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:41.435507    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:41.435507    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:41.495280    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:41.495280    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:41.540098    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:41.540098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:41.631123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:41.619480   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.620700   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.621495   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.624397   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:41.626223   12928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:41.631123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:41.631123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:41.659481    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:41.660004    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 07:33:43.958857    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
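	The W lines from PID 6044 interleaved here belong to the parallel no-preload-099700 run, and its failure mode is different: 127.0.0.1:57440 accepts the TCP connection and then drops it (EOF), as a forwarded port with a dead backend would, whereas PID 1436's localhost:8443 refuses outright. A hedged sketch that separates the two cases at the socket level (addresses taken from the log; note a healthy TLS endpoint would simply leave the read waiting):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"net"
	    	"syscall"
	    	"time"
	    )

	    // classify distinguishes "nothing listening" (connection refused)
	    // from "listener accepts, then hangs up" (read returns io.EOF).
	    func classify(addr string) string {
	    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	    	if errors.Is(err, syscall.ECONNREFUSED) {
	    		return "refused: nothing listening"
	    	}
	    	if err != nil {
	    		return "dial error: " + err.Error()
	    	}
	    	defer conn.Close()
	    	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	    	if _, err := conn.Read(make([]byte, 1)); err != nil {
	    		return "connected, then: " + err.Error() // "EOF" if the peer closed
	    	}
	    	return "connected and readable"
	    }

	    func main() {
	    	fmt.Println("localhost:8443   ->", classify("localhost:8443"))
	    	fmt.Println("127.0.0.1:57440  ->", classify("127.0.0.1:57440"))
	    }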
	I1210 07:33:44.218114    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:44.245684    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:44.277948    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.277948    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:44.281784    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:44.308191    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.308236    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:44.311628    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:44.338002    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.338064    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:44.341334    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:44.369051    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.369051    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:44.373446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:44.401355    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.401355    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:44.404625    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:44.435928    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.436021    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:44.438720    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:44.468518    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.468518    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:44.472419    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:44.505185    1436 logs.go:282] 0 containers: []
	W1210 07:33:44.505185    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:44.505185    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:44.505185    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:44.542000    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:44.542000    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:44.637866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:44.628159   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.629590   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.630676   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.631884   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:44.633066   13089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:44.637866    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:44.637866    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:44.668149    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:44.668149    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:44.722118    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:44.722118    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:47.287165    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:47.315701    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:47.348691    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.348691    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:47.352599    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:47.382757    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.382757    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:47.386956    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:47.416756    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.416756    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:47.420505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:47.447567    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.447631    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:47.451327    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:47.481198    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.481198    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:47.484905    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:47.515752    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.515752    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:47.519521    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:47.549878    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.549878    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:47.553160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:47.580738    1436 logs.go:282] 0 containers: []
	W1210 07:33:47.580738    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:47.580738    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:47.580738    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:47.620996    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:47.620996    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:47.717751    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:47.708186   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.709887   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.710982   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.712203   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:47.713847   13264 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:47.717751    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:47.717751    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:47.747052    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:47.747052    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:47.806827    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:47.806907    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.374572    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:50.402608    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:50.434845    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.434845    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:50.439264    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:50.472884    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.472884    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:50.476675    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:50.506875    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.506875    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:50.510516    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:50.544104    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.544104    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:50.547823    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:50.582563    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.582563    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:50.586716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:50.617520    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.617520    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:50.621651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:50.654870    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.654924    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:50.658739    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:50.687650    1436 logs.go:282] 0 containers: []
	W1210 07:33:50.687650    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:50.687650    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:50.687650    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:50.741903    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:50.741970    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:50.801979    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:50.801979    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:50.841061    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:50.841061    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:50.929313    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:50.919053   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.920402   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.921292   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.924075   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:50.925956   13455 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:50.929313    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:50.929313    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1210 07:33:53.996838    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:33:53.461932    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:53.489152    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:53.525676    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.525676    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:53.529484    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:53.564410    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.564438    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:53.567827    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:53.614175    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.614215    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:53.620260    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:53.655138    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.655138    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:53.659487    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:53.692591    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.692591    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:53.696809    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:53.736843    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.736843    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:53.741782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:53.770910    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.770910    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:53.775145    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:53.805756    1436 logs.go:282] 0 containers: []
	W1210 07:33:53.805756    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:53.805756    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:53.805756    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:53.868923    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:53.868923    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:53.909599    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:53.909599    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:53.994728    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:53.985324   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.986526   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.987499   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.989286   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:53.991204   13607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:53.994728    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:53.994728    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:54.023183    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:54.023245    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:56.581055    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:56.606311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:56.640781    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.640781    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:56.645032    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:56.673780    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.673780    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:56.680498    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:56.708843    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.708843    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:56.711839    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:56.743689    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.743689    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:56.747149    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:56.776428    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.776490    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:56.780173    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:56.810171    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.810171    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:56.815860    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:56.843104    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.843150    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:56.846843    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:56.875180    1436 logs.go:282] 0 containers: []
	W1210 07:33:56.875180    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:56.875180    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:56.875260    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:33:56.937905    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:33:56.937905    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:33:56.978984    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:33:56.978984    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:33:57.072981    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:33:57.057332   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.058272   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.063163   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.064098   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:33:57.066478   13767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:33:57.072981    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:33:57.072981    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:33:57.103275    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:33:57.103275    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:33:59.657150    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:33:59.680473    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:33:59.717538    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.717538    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:33:59.721115    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:33:59.750445    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.750445    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:33:59.754192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:33:59.783080    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.783609    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:33:59.786966    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:33:59.815381    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.815381    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:33:59.818634    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:33:59.846978    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.847073    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:33:59.850723    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:33:59.881504    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.881531    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:33:59.885538    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:33:59.912091    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.912091    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:33:59.915555    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:33:59.945836    1436 logs.go:282] 0 containers: []
	W1210 07:33:59.945836    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:33:59.945836    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:33:59.945918    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:00.010932    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:00.010932    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:00.050450    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:00.050450    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:00.135132    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:00.122005   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.123035   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.124724   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.126083   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:00.127122   13928 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:00.135132    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:00.135132    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:00.162951    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:00.162951    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:02.722322    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:02.747735    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:02.782353    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.782423    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:02.785942    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:02.815562    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.815562    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:02.819580    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:02.851940    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.851940    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:02.855858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:02.883743    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.883743    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:02.887230    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:02.919540    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.919540    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:02.923123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:02.951385    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.951439    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:02.955922    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:02.985112    1436 logs.go:282] 0 containers: []
	W1210 07:34:02.985172    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:02.988380    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:03.020559    1436 logs.go:282] 0 containers: []
	W1210 07:34:03.020590    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:03.020590    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:03.020643    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:03.113834    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:03.100874   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.101891   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.104988   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.106378   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:03.107577   14085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:03.113834    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:03.113834    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:03.143434    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:03.143494    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:03.195505    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:03.195505    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:03.260582    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:03.260582    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1210 07:34:04.034666    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): Get "https://127.0.0.1:57440/api/v1/nodes/no-preload-099700": EOF
	I1210 07:34:05.805687    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:05.830820    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:05.867098    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.867098    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:05.870201    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:05.902724    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.902724    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:05.906452    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:05.937581    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.937660    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:05.941081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:05.970812    1436 logs.go:282] 0 containers: []
	W1210 07:34:05.970812    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:05.974826    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:06.005319    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.005319    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:06.009298    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:06.036331    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.036367    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:06.040396    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:06.070470    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.070522    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:06.073716    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:06.105829    1436 logs.go:282] 0 containers: []
	W1210 07:34:06.105902    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:06.105902    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:06.105902    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:06.168761    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:06.168761    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:06.209503    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:06.209503    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:06.300233    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:06.287348   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.288220   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.291316   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.292659   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:06.293382   14256 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
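	Every describe-nodes attempt fails the same way: nothing is listening on the apiserver's secure port, so kubectl gets connection refused before any API discovery can happen. A quick manual check from inside the node, assuming the default localhost:8443 endpoint used above:

	    # expect "connection refused" while no kube-apiserver container is running
	    curl -ksS https://localhost:8443/healthz || true
	    # same probe via the bundled kubectl, mirroring the failing call
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz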
	I1210 07:34:06.300233    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:06.300233    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:06.325856    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:06.326404    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
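	One full gathering pass therefore collects the kubelet and Docker/cri-docker journals, recent kernel warnings, a node description, and a container listing. Note the `which crictl || echo crictl` guard: when crictl is absent it degrades to a bare crictl invocation that fails, so the `|| sudo docker ps -a` fallback still fires. A condensed sketch of the pass, assuming the same units and binary paths:

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a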
	W1210 07:34:12.432519    6044 node_ready.go:55] error getting node "no-preload-099700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1210 07:34:12.432519    6044 node_ready.go:38] duration metric: took 6m0.0003472s for node "no-preload-099700" to be "Ready" ...
	I1210 07:34:12.435520    6044 out.go:203] 
	W1210 07:34:12.437521    6044 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:34:12.437521    6044 out.go:285] * 
	W1210 07:34:12.439520    6044 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:34:12.443519    6044 out.go:203] 
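	The GUEST_START exit above is the direct consequence: process 6044 waited the full 6m0s for node "no-preload-099700" to report Ready and never saw the condition. An equivalent manual wait, assuming the cluster's kubeconfig is active (a hypothetical hand-run check, not part of the test):

	    kubectl wait --for=condition=Ready node/no-preload-099700 --timeout=6m
	    # or inspect the Ready condition directly
	    kubectl get node no-preload-099700 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'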
	I1210 07:34:08.888339    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:08.915007    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:08.945370    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.945370    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:08.948912    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:08.978717    1436 logs.go:282] 0 containers: []
	W1210 07:34:08.978744    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:08.982191    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:09.014137    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.014137    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:09.019817    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:09.049527    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.049527    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:09.053402    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:09.083494    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.083519    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:09.087029    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:09.115269    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.115306    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:09.117873    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:09.155291    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.155351    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:09.159388    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:09.189238    1436 logs.go:282] 0 containers: []
	W1210 07:34:09.189238    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:09.189238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:09.189238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:09.276866    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:09.264194   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.265301   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.267799   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269023   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:09.269943   14415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:09.276924    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:09.276924    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:09.303083    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:09.303603    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:09.350941    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:09.350941    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:09.414406    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:09.414406    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:11.970539    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:11.997446    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:12.029543    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.029543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:12.033746    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:12.061992    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.061992    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:12.066520    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:12.095801    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.095801    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:12.099364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:12.129880    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.129949    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:12.133782    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:12.162555    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.162555    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:12.167228    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:12.196229    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.196229    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:12.200137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:12.226729    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.226729    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:12.230279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:12.255730    1436 logs.go:282] 0 containers: []
	W1210 07:34:12.255730    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:12.255730    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:12.255730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:12.318642    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:12.318642    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:12.364065    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:12.364065    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:12.469524    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:12.459675   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.460966   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.462240   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.463300   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:12.464338   14588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:12.469574    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:12.469574    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:12.496807    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:12.496950    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:15.052930    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:15.080623    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:15.117403    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.117403    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:15.120370    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:15.147363    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.148371    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:15.151363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:15.180365    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.180365    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:15.183366    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:15.215366    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.215366    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:15.218364    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:15.247369    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.247369    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:15.251365    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:15.283373    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.283373    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:15.286369    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:15.314370    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.314370    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:15.317368    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:15.347380    1436 logs.go:282] 0 containers: []
	W1210 07:34:15.347380    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:15.347380    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:15.347380    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:15.421369    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:15.421369    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:15.458368    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:15.458368    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:15.566221    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:15.551230   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.552488   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.553348   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.556086   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:15.557771   14760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:15.566279    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:15.566338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:15.605803    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:15.605803    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:18.163754    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:18.197669    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:18.254543    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.254543    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:18.260541    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:18.293062    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.293062    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:18.296833    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:18.327885    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.327968    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:18.331280    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:18.368942    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.368942    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:18.372299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:18.400463    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.400463    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:18.405006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:18.446334    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.446379    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:18.449958    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:18.478295    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.478381    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:18.482123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:18.510432    1436 logs.go:282] 0 containers: []
	W1210 07:34:18.510506    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:18.510548    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:18.510548    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:18.572862    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:18.572862    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:18.614127    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:18.614127    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:18.702730    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:18.692245   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.693386   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.694454   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.697285   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:18.699129   14922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:18.702730    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:18.702730    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:18.729639    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:18.729639    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.289931    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:21.315099    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:21.349129    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.349129    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:21.352917    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:21.385897    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.386013    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:21.389207    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:21.439847    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.439847    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:21.444868    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:21.473011    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.473011    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:21.476938    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:21.503941    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.503983    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:21.507954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:21.536377    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.536377    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:21.540123    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:21.571714    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.571714    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:21.575681    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:21.605581    1436 logs.go:282] 0 containers: []
	W1210 07:34:21.605581    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:21.605581    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:21.605581    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:21.633565    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:21.633565    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:21.687271    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:21.687271    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:21.750102    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:21.750102    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:21.792165    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:21.792165    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:21.885403    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:21.874829   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876021   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.876953   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.879461   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:21.880406   15104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.393597    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:24.420363    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:24.450891    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.450891    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:24.454037    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:24.483407    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.483407    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:24.489862    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:24.517830    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.517830    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:24.521711    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:24.549403    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.549403    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:24.553551    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:24.580367    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.580367    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:24.584748    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:24.612646    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.612646    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:24.616710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:24.647684    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.647753    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:24.651184    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:24.679053    1436 logs.go:282] 0 containers: []
	W1210 07:34:24.679053    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:24.679053    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:24.679053    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:24.768115    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:24.758247   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.759411   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.760423   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.761390   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:24.762221   15246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:24.768115    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:24.768115    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:24.795167    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:24.795201    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:24.844459    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:24.844459    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:24.907171    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:24.907171    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.453205    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:27.478026    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:27.513249    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.513249    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:27.517125    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:27.547733    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.547733    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:27.551680    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:27.577736    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.577736    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:27.581469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:27.612483    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.612483    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:27.616434    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:27.644895    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.644895    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:27.650606    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:27.678273    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.678273    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:27.681744    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:27.708604    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.708604    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:27.712244    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:27.742726    1436 logs.go:282] 0 containers: []
	W1210 07:34:27.742726    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:27.742726    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:27.742726    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:27.807570    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:27.807570    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:27.846722    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:27.846722    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:27.929641    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:27.919463   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.920475   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.921726   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.922614   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:27.924717   15416 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:27.929641    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:27.929641    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:27.956087    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:27.956087    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.506646    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:30.530148    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:30.563444    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.563444    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:30.567219    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:30.596843    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.596843    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:30.600803    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:30.628947    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.628947    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:30.632665    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:30.663325    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.663369    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:30.667341    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:30.695640    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.695640    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:30.699545    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:30.728310    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.728310    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:30.731899    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:30.758598    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.758598    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:30.763285    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:30.792051    1436 logs.go:282] 0 containers: []
	W1210 07:34:30.792051    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:30.792051    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:30.792051    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:30.830219    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:30.830219    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:30.919635    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:30.909299   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.910353   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.912393   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.914543   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:30.915506   15578 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:30.919635    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:30.919635    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:30.949360    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:30.949360    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:34:30.997435    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:30.997435    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.565782    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:33.590543    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:33.623936    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.623936    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:33.629607    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:33.664589    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.664673    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:33.668215    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:33.698892    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.698892    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:33.702344    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:33.733428    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.733428    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:33.737226    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:33.764873    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.764873    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:33.768422    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:33.800350    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.800350    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:33.804811    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:33.836711    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.836711    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:33.840164    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:33.869248    1436 logs.go:282] 0 containers: []
	W1210 07:34:33.869333    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:33.869333    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:33.869333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:33.932626    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:33.933627    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:33.974227    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:33.974227    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:34.066031    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:34.054849   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.056230   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.057835   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.058730   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:34.060848   15737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:34:34.066031    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:34.066031    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:34.092765    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:34.092765    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
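	Each probe above is the same docker ps invocation with only the k8s_<component> name filter changed. A minimal sketch of the check, assuming shell access to the node (for example via minikube ssh); the component name is a placeholder and can be swapped for etcd, coredns, kube-scheduler, and so on:

	# List any container, running or exited, whose name carries the
	# cri-dockerd/kubelet naming prefix k8s_<component>; an empty result
	# ("0 containers" in the log) means the component was never started.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}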
	I1210 07:34:36.652871    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:34:36.677531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:34:36.712608    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.712608    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:34:36.718832    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:34:36.748298    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.748298    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:34:36.751762    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:34:36.783390    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.783403    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:34:36.787051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:34:36.815730    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.815766    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:34:36.819100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:34:36.848875    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.848875    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:34:36.852925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:34:36.886657    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.886657    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:34:36.890808    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:34:36.920858    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.920858    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:34:36.924583    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:34:36.955882    1436 logs.go:282] 0 containers: []
	W1210 07:34:36.955960    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:34:36.956001    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:34:36.956001    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:34:37.021848    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:34:37.021848    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:34:37.060744    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:34:37.060744    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:34:37.154895    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:34:37.142691   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.143928   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.145557   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.147806   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:34:37.149984   15905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:34:37.154895    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:34:37.154895    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:34:37.182385    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:34:37.182385    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
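	Every "describe nodes" attempt above fails for the same reason: no kube-apiserver process or container exists on the node, so kubectl's API discovery calls to https://localhost:8443 are refused at the TCP level. A minimal sketch to verify by hand, reusing the exact commands from the trace (assuming shell access to the node):

	# Prints nothing while no apiserver process is running on the node.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Reproduces the failing call from the log; expect "connection refused"
	# until the apiserver comes up.
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig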
	[The same log-gathering cycle repeats every ~3 seconds from 07:34:39 through 07:34:58 (seven further iterations, omitted here as duplicates): sudo pgrep finds no kube-apiserver process; the docker ps probes for k8s_kube-apiserver, k8s_etcd, k8s_coredns, k8s_kube-scheduler, k8s_kube-proxy, k8s_kube-controller-manager, k8s_kindnet and k8s_kubernetes-dashboard each return 0 containers; and kubectl describe nodes (PIDs 16085, 16243, 16416, 16579, 16766, 16914, 17081) fails with the same "connection refused" against localhost:8443. Only the order of the five "Gathering logs for ..." steps varies between iterations.]
	I1210 07:35:01.230900    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:01.255356    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:01.292137    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.292190    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:01.297192    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:01.328372    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.328372    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:01.332239    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:01.360635    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.360635    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:01.364529    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:01.391175    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.391175    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:01.394754    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:01.423093    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.423093    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:01.427022    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:01.454965    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.454965    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:01.459137    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:01.487734    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.487734    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:01.492051    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:01.518150    1436 logs.go:282] 0 containers: []
	W1210 07:35:01.518150    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:01.518150    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:01.518150    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:01.580940    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:01.580940    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:01.620363    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:01.620363    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:01.710696    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:01.700163   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.701113   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.703089   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.704462   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:01.705476   17245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:01.710696    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:01.710696    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:01.736867    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:01.736867    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
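	For reference, the per-cycle log collection is fixed; the commands below are taken verbatim from the trace. Container status prefers crictl and falls back to plain docker ps when crictl is not installed:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	# `which crictl || echo crictl` resolves to the crictl path if present;
	# if crictl is missing, the first command fails and docker ps runs instead.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a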
	I1210 07:35:04.295439    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:04.322348    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:04.356895    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.356919    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:04.361858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:04.396943    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.397019    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:04.401065    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:04.431929    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.431929    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:04.436798    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:04.468073    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.468073    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:04.472528    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:04.503230    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.503230    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:04.506632    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:04.540016    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.540016    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:04.543627    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:04.576446    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.576446    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:04.583292    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:04.611475    1436 logs.go:282] 0 containers: []
	W1210 07:35:04.611542    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:04.611542    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:04.611542    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:04.640376    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:04.640433    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:04.695309    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:04.695309    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:04.756418    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:04.756418    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:04.795089    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:04.795089    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:04.891481    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:04.878108   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.880090   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.883096   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.885167   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:04.886541   17422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.396688    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:07.422837    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:07.454807    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.454807    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:07.459071    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:07.489720    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.489720    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:07.493466    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:07.519982    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.519982    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:07.523858    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:07.552985    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.552985    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:07.556972    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:07.589709    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.589709    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:07.593709    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:07.621519    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.621519    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:07.625151    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:07.654324    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.654404    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:07.657279    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:07.690913    1436 logs.go:282] 0 containers: []
	W1210 07:35:07.690966    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:07.690988    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:07.690988    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:07.757157    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:07.757157    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:07.796333    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:07.796333    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:07.893954    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:07.881331   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.882766   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.885657   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887077   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:07.887623   17569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:07.893954    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:07.893954    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:07.943452    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:07.943452    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
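	The "container status" probe above uses a shell fallback chain: run crictl if it is installed, otherwise fall back to plain docker. Unpacking the backquotes:

	    # `which crictl || echo crictl` prints the crictl path if the binary
	    # exists, else just the bare name "crictl"; in the latter case the
	    # ps -a invocation fails (command not found) and the outer || falls
	    # through to docker ps -a.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a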
	I1210 07:35:10.496562    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:10.522517    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:10.555517    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.555517    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:10.560160    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:10.591257    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.591306    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:10.594925    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:10.623075    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.623075    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:10.626725    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:10.654115    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.654115    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:10.658014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:10.689683    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.689683    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:10.693386    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:10.721754    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.721754    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:10.725087    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:10.753052    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.753052    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:10.756926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:10.787466    1436 logs.go:282] 0 containers: []
	W1210 07:35:10.787466    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:10.787466    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:10.787466    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:10.882563    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:10.873740   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.874902   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.876114   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.877091   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:10.878349   17724 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:10.882563    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:10.882563    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:10.944299    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:10.944299    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:10.993835    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:10.993835    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:11.053114    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:11.053114    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
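	Each ~3-second block above is one iteration of the same wait loop: pgrep for a kube-apiserver process, then one docker ps -a name filter per control-plane component, then a rotation through the five log sources (kubelet, dmesg, describe nodes, Docker, container status). A rough shell equivalent of the probe, for illustration only:

	    # Poll until an apiserver process appears, as the pgrep lines above do.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 3
	    done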
	I1210 07:35:13.597304    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:13.621417    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:13.653723    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.653842    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:13.657020    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:13.690175    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.690175    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:13.693954    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:13.723350    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.723350    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:13.728514    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:13.757179    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.757179    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:13.765645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:13.794387    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.794473    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:13.798130    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:13.826937    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.826937    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:13.830895    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:13.865171    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.865171    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:13.869540    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:13.899920    1436 logs.go:282] 0 containers: []
	W1210 07:35:13.899920    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:13.899920    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:13.899920    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:13.964338    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:13.964338    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:14.028584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:14.028584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:14.067840    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:14.067840    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:14.154123    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:14.144490   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.145615   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.146725   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.148037   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:14.149069   17925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:14.154123    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:14.154123    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:16.685726    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:16.716822    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:16.753764    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.753827    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:16.757211    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:16.789634    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.789634    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:16.793640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:16.822677    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.822728    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:16.826522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:16.853660    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.853660    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:16.858461    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:16.887452    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.887504    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:16.893014    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:16.939344    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.939344    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:16.943118    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:16.971703    1436 logs.go:282] 0 containers: []
	W1210 07:35:16.971781    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:16.974884    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:17.003517    1436 logs.go:282] 0 containers: []
	W1210 07:35:17.003595    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:17.003595    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:17.003595    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:17.088355    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:17.079526   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.080729   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.081812   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.083165   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:17.084419   18064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:17.088355    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:17.088355    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:17.117181    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:17.117241    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:17.168070    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:17.168155    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:17.231584    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:17.231584    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:19.776112    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:19.801640    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:19.835886    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.835886    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:19.839626    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:19.872127    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.872127    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:19.876526    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:19.929339    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.929339    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:19.933522    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:19.962400    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.962400    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:19.966133    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:19.994468    1436 logs.go:282] 0 containers: []
	W1210 07:35:19.994544    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:19.998645    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:20.027252    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.027252    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:20.032575    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:20.060153    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.060153    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:20.065171    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:20.091891    1436 logs.go:282] 0 containers: []
	W1210 07:35:20.091891    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:20.091891    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:20.091891    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:20.131103    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:20.131103    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:20.218614    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:20.208033   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.209212   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.210215   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214139   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:20.214965   18233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:20.218614    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:20.219146    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:20.245788    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:20.245788    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:20.298111    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:20.298207    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:22.861878    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:22.887649    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:22.922573    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.922573    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:22.926179    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:22.959170    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.959197    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:22.963338    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:22.994510    1436 logs.go:282] 0 containers: []
	W1210 07:35:22.994566    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:22.997861    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:23.029960    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.030036    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:23.033513    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:23.064625    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.064625    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:23.069769    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:23.101906    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.101943    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:23.105651    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:23.136615    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.136615    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:23.140616    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:23.170857    1436 logs.go:282] 0 containers: []
	W1210 07:35:23.170942    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:23.170942    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:23.170942    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:23.233098    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:23.233098    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:23.273238    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:23.273238    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:23.361638    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:23.352696   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.354050   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.356707   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.357782   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:23.358807   18398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:23.361638    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:23.361638    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:23.390711    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:23.391230    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:25.949809    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:25.975470    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:26.007496    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.007496    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:26.011469    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:26.044617    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.044617    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:26.048311    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:26.078756    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.078783    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:26.082359    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:26.112113    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.112183    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:26.115713    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:26.148097    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.148097    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:26.151926    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:26.182729    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.182753    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:26.186743    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:26.217219    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.217219    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:26.223773    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:26.251643    1436 logs.go:282] 0 containers: []
	W1210 07:35:26.251713    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:26.251713    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:26.251713    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:26.278698    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:26.278698    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:26.332014    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:26.332014    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:26.394304    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:26.394304    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:26.433073    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:26.433073    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:26.519395    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:26.506069   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.507354   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.509591   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.512516   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:26.514125   18575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:29.024398    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:29.049372    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:29.084989    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.085019    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:29.089078    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:29.116420    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.116420    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:29.120531    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:29.149880    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.149880    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:29.153505    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:29.181726    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.181790    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:29.185295    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:29.216713    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.216713    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:29.222568    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:29.249487    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.249487    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:29.253512    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:29.283473    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.283497    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:29.287061    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:29.313225    1436 logs.go:282] 0 containers: []
	W1210 07:35:29.313225    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:29.313225    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:29.313225    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:29.399665    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:29.386954   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.388181   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.390621   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.391811   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:29.393167   18717 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1210 07:35:29.399665    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:29.399665    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:29.428593    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:29.428593    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:29.477815    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:29.477877    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:29.541874    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:29.541874    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:32.087876    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:32.113456    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:32.145773    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.145805    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:32.149787    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:32.178912    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.178987    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:32.182700    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:32.213301    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.213301    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:32.217129    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:32.246756    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.246824    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:32.250299    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:32.278791    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.278835    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:32.282397    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:32.316208    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.316278    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:32.320233    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:32.349155    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.349155    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:32.352807    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:32.386875    1436 logs.go:282] 0 containers: []
	W1210 07:35:32.386875    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:32.386944    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:32.386944    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:32.479781    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:32.469750   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.470693   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.473307   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.474321   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:32.475302   18879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:32.479781    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:32.479781    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:32.506994    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:32.506994    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:32.561757    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:32.561757    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:32.624545    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:32.624545    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.176040    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:35.201056    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:35.235735    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.235735    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:35.239655    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:35.267349    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.267416    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:35.270515    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:35.303264    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.303264    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:35.306371    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:35.339037    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.339263    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:35.343297    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:35.375639    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.375639    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:35.379647    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:35.407670    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.407670    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:35.411506    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:35.446240    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.446240    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:35.450265    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:35.477814    1436 logs.go:282] 0 containers: []
	W1210 07:35:35.477814    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:35.477814    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:35.477814    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:35.541174    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:35.541174    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:35.581633    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:35.581633    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:35.673254    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:35.664165   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.665345   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.666190   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.668510   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:35.669467   19055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:35.673254    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:35.673254    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:35.701200    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:35.701200    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:38.255869    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:38.281759    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:38.316123    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.316123    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:38.319358    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:38.348903    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.348943    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:38.352900    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:38.381759    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.381795    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:38.385361    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:38.414524    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.414586    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:38.417710    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:38.447131    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.447205    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:38.451100    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:38.479508    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.479543    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:38.483003    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:38.512848    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.512848    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:38.516967    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:38.547680    1436 logs.go:282] 0 containers: []
	W1210 07:35:38.547680    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:38.547680    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:38.547680    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:38.614038    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:38.614038    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:38.658448    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:38.658448    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:38.743054    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:38.733038   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.734073   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.735791   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.738099   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:38.739595   19218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:38.743054    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:38.743054    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:38.775152    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:38.775214    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:41.333835    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:41.358081    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1210 07:35:41.393471    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.393471    1436 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:35:41.396774    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1210 07:35:41.425173    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.425224    1436 logs.go:284] No container was found matching "etcd"
	I1210 07:35:41.428523    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1210 07:35:41.456663    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.456663    1436 logs.go:284] No container was found matching "coredns"
	I1210 07:35:41.459654    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1210 07:35:41.490212    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.490212    1436 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:35:41.493250    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1210 07:35:41.523505    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.523505    1436 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:35:41.527006    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1210 07:35:41.555529    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.555529    1436 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:35:41.559605    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1210 07:35:41.590913    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.591011    1436 logs.go:284] No container was found matching "kindnet"
	I1210 07:35:41.596392    1436 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1210 07:35:41.627361    1436 logs.go:282] 0 containers: []
	W1210 07:35:41.627421    1436 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 07:35:41.627441    1436 logs.go:123] Gathering logs for kubelet ...
	I1210 07:35:41.627538    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:35:41.692948    1436 logs.go:123] Gathering logs for dmesg ...
	I1210 07:35:41.692948    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:35:41.731909    1436 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:35:41.731909    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:35:41.816121    1436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:35:41.806508   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.807705   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.808985   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.810306   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:35:41.811462   19377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:35:41.816121    1436 logs.go:123] Gathering logs for Docker ...
	I1210 07:35:41.816121    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1210 07:35:41.844622    1436 logs.go:123] Gathering logs for container status ...
	I1210 07:35:41.844622    1436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:35:44.401865    1436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:35:44.426294    1436 out.go:203] 
	W1210 07:35:44.428631    1436 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1210 07:35:44.428631    1436 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1210 07:35:44.428631    1436 out.go:285] * Related issues:
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1210 07:35:44.428631    1436 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1210 07:35:44.430629    1436 out.go:203] 
	
	
	==> Docker <==
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794207271Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794291179Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794301480Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794308081Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794314981Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794339784Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=false storage-driver=overlay2 version=29.1.2
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.794382688Z" level=info msg="Initializing buildkit"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.916550520Z" level=info msg="Completed buildkit initialization"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923562810Z" level=info msg="Daemon has completed initialization"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923807334Z" level=info msg="API listen on /run/docker.sock"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923950448Z" level=info msg="API listen on [::]:2376"
	Dec 10 07:28:08 no-preload-099700 dockerd[923]: time="2025-12-10T07:28:08.923820636Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 07:28:08 no-preload-099700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 10 07:28:09 no-preload-099700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Start docker client with request timeout 0s"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Loaded network plugin cni"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 10 07:28:09 no-preload-099700 cri-dockerd[1216]: time="2025-12-10T07:28:09Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 10 07:28:09 no-preload-099700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:47:10.992382   21125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:47:10.993234   21125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:47:10.996335   21125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:47:10.997674   21125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1210 07:47:10.998939   21125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +7.347496] CPU: 6 PID: 490841 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fe73ddc4b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fe73ddc4af6.
	[  +0.000000] RSP: 002b:00007ffc57a05a90 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000000] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.867258] CPU: 5 PID: 491006 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7f1a7acb4b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7f1a7acb4af6.
	[  +0.000001] RSP: 002b:00007ffe19029200 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[Dec10 07:32] tmpfs: Unknown parameter 'noswap'
	[ +15.541609] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 07:47:11 up  3:15,  0 user,  load average: 0.50, 0.70, 2.25
	Linux no-preload-099700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:47:07 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:47:08 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1512.
	Dec 10 07:47:08 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:08 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:08 no-preload-099700 kubelet[20951]: E1210 07:47:08.360137   20951 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:47:08 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:47:08 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:47:08 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1513.
	Dec 10 07:47:08 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:08 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:09 no-preload-099700 kubelet[20979]: E1210 07:47:09.079267   20979 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:47:09 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:47:09 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:47:09 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1514.
	Dec 10 07:47:09 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:09 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:09 no-preload-099700 kubelet[20998]: E1210 07:47:09.822267   20998 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:47:09 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:47:09 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:47:10 no-preload-099700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1515.
	Dec 10 07:47:10 no-preload-099700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:10 no-preload-099700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:47:10 no-preload-099700 kubelet[21103]: E1210 07:47:10.580925   21103 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:47:10 no-preload-099700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:47:10 no-preload-099700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-099700 -n no-preload-099700: exit status 2 (606.9193ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-099700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (232.00s)
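
Note on the failure above: the kubelet journal shows the likely root cause of this chain. kubelet v1.35.0-rc.1 refuses to start because "kubelet is configured to not run on a host using cgroup v1", so the apiserver container is never created and every probe of localhost:8443 is refused. The dockerd startup warning earlier in the log ("Support for cgroup v1 is deprecated ...") confirms the WSL2 node is still on cgroup v1. A minimal, hand-run sketch to verify this, assuming a shell inside the node (for example via minikube ssh -p no-preload-099700):

	# Filesystem type mounted at /sys/fs/cgroup:
	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.
	stat -fc %T /sys/fs/cgroup

	# Docker's own view of the host cgroup version ("1" or "2").
	docker info --format '{{.CgroupVersion}}'

If both report cgroup v1, the fix is host-side (for WSL2, setting kernelCommandLine = cgroup_no_v1=all in .wslconfig is one commonly cited way to boot into cgroup v2) rather than in the apiserver flags suggested by the exit message.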


Test pass (358/427)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.34
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.79
9 TestDownloadOnly/v1.28.0/DeleteAll 0.74
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.74
12 TestDownloadOnly/v1.34.3/json-events 7.24
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.62
18 TestDownloadOnly/v1.34.3/DeleteAll 1
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.46
21 TestDownloadOnly/v1.35.0-rc.1/json-events 14.56
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.2
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.7
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.54
30 TestBinaryMirror 4.29
31 TestOffline 147.91
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.23
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 299.68
38 TestAddons/serial/Volcano 50.27
40 TestAddons/serial/GCPAuth/Namespaces 0.26
41 TestAddons/serial/GCPAuth/FakeCredentials 11.16
45 TestAddons/parallel/RegistryCreds 1.61
47 TestAddons/parallel/InspektorGadget 12.35
48 TestAddons/parallel/MetricsServer 7.49
50 TestAddons/parallel/CSI 58.74
51 TestAddons/parallel/Headlamp 29.82
52 TestAddons/parallel/CloudSpanner 6.03
53 TestAddons/parallel/LocalPath 57.03
54 TestAddons/parallel/NvidiaDevicePlugin 7.88
55 TestAddons/parallel/Yakd 13.33
56 TestAddons/parallel/AmdGpuDevicePlugin 7.65
57 TestAddons/StoppedEnableDisable 12.88
58 TestCertOptions 80.71
59 TestCertExpiration 286.42
60 TestDockerFlags 82.76
61 TestForceSystemdFlag 59.76
62 TestForceSystemdEnv 78.89
68 TestErrorSpam/start 2.55
69 TestErrorSpam/status 2.14
70 TestErrorSpam/pause 2.72
71 TestErrorSpam/unpause 2.53
72 TestErrorSpam/stop 19.48
75 TestFunctional/serial/CopySyncFile 0.03
76 TestFunctional/serial/StartWithProxy 97.11
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 47.54
79 TestFunctional/serial/KubeContext 0.1
80 TestFunctional/serial/KubectlGetPods 0.28
83 TestFunctional/serial/CacheCmd/cache/add_remote 9.36
84 TestFunctional/serial/CacheCmd/cache/add_local 4.19
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
86 TestFunctional/serial/CacheCmd/cache/list 0.18
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.6
88 TestFunctional/serial/CacheCmd/cache/cache_reload 4.46
89 TestFunctional/serial/CacheCmd/cache/delete 0.37
90 TestFunctional/serial/MinikubeKubectlCmd 0.36
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.18
92 TestFunctional/serial/ExtraConfig 52.57
93 TestFunctional/serial/ComponentHealth 0.14
94 TestFunctional/serial/LogsCmd 1.76
95 TestFunctional/serial/LogsFileCmd 1.8
96 TestFunctional/serial/InvalidService 5.13
98 TestFunctional/parallel/ConfigCmd 1.23
100 TestFunctional/parallel/DryRun 1.63
101 TestFunctional/parallel/InternationalLanguage 0.66
102 TestFunctional/parallel/StatusCmd 1.85
107 TestFunctional/parallel/AddonsCmd 0.51
108 TestFunctional/parallel/PersistentVolumeClaim 24.79
110 TestFunctional/parallel/SSHCmd 1.27
111 TestFunctional/parallel/CpCmd 3.48
112 TestFunctional/parallel/MySQL 75.34
113 TestFunctional/parallel/FileSync 0.55
114 TestFunctional/parallel/CertSync 3.28
118 TestFunctional/parallel/NodeLabels 0.14
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
122 TestFunctional/parallel/License 1.38
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.88
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.4
128 TestFunctional/parallel/ServiceCmd/DeployApp 13.31
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.97
136 TestFunctional/parallel/ProfileCmd/profile_list 0.87
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.84
138 TestFunctional/parallel/ServiceCmd/List 0.87
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.88
140 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
141 TestFunctional/parallel/Version/short 0.17
142 TestFunctional/parallel/Version/components 0.88
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.47
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.52
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.55
147 TestFunctional/parallel/ImageCommands/ImageBuild 9.6
148 TestFunctional/parallel/ImageCommands/Setup 1.67
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.07
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.92
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.6
152 TestFunctional/parallel/DockerEnv/powershell 5.26
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.68
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.98
155 TestFunctional/parallel/ServiceCmd/Format 15.01
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.09
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.32
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.32
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.89
161 TestFunctional/parallel/ServiceCmd/URL 15.01
162 TestFunctional/delete_echo-server_images 0.14
163 TestFunctional/delete_my-image_image 0.06
164 TestFunctional/delete_minikube_cached_images 0.06
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.1
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 9.7
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 3.62
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.18
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.18
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.58
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 4.48
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.36
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.24
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.38
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 1.06
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 1.43
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.74
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.43
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 1.07
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 3.15
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.58
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 3.28
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.56
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 2.19
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.89
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.82
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.83
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.3
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.34
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.32
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.17
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 1.89
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.49
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.46
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.44
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.52
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 5.38
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.81
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 3.45
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 2.82
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 3.55
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.69
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.9
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 1.2
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.83
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.14
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.06
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.07
260 TestMultiControlPlane/serial/StartCluster 304.51
261 TestMultiControlPlane/serial/DeployApp 10.15
262 TestMultiControlPlane/serial/PingHostFromPods 2.89
263 TestMultiControlPlane/serial/AddWorkerNode 52.76
264 TestMultiControlPlane/serial/NodeLabels 0.14
265 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.96
266 TestMultiControlPlane/serial/CopyFile 33.92
267 TestMultiControlPlane/serial/StopSecondaryNode 13.38
268 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.56
269 TestMultiControlPlane/serial/RestartSecondaryNode 104.75
270 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.98
271 TestMultiControlPlane/serial/RestartClusterKeepsNodes 179.73
272 TestMultiControlPlane/serial/DeleteSecondaryNode 14.64
273 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.48
274 TestMultiControlPlane/serial/StopCluster 37.32
275 TestMultiControlPlane/serial/RestartCluster 111.68
276 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.53
277 TestMultiControlPlane/serial/AddSecondaryNode 107.56
278 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.99
281 TestImageBuild/serial/Setup 60.86
282 TestImageBuild/serial/NormalBuild 3.77
283 TestImageBuild/serial/BuildWithBuildArg 2.53
284 TestImageBuild/serial/BuildWithDockerIgnore 1.24
285 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.25
290 TestJSONOutput/start/Command 91.9
291 TestJSONOutput/start/Audit 0
293 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/pause/Command 1.19
297 TestJSONOutput/pause/Audit 0
299 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/unpause/Command 0.94
303 TestJSONOutput/unpause/Audit 0
305 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/stop/Command 12.18
309 TestJSONOutput/stop/Audit 0
311 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
313 TestErrorJSONOutput 0.68
315 TestKicCustomNetwork/create_custom_network 66.68
316 TestKicCustomNetwork/use_default_bridge_network 65.59
317 TestKicExistingNetwork 68.54
318 TestKicCustomSubnet 69.55
319 TestKicStaticIP 68.81
320 TestMainNoArgs 0.16
321 TestMinikubeProfile 129.93
324 TestMountStart/serial/StartWithMountFirst 14.14
325 TestMountStart/serial/VerifyMountFirst 0.55
326 TestMountStart/serial/StartWithMountSecond 13.83
327 TestMountStart/serial/VerifyMountSecond 0.54
328 TestMountStart/serial/DeleteFirst 2.45
329 TestMountStart/serial/VerifyMountPostDelete 0.54
330 TestMountStart/serial/Stop 1.87
331 TestMountStart/serial/RestartStopped 10.79
332 TestMountStart/serial/VerifyMountPostStop 0.55
335 TestMultiNode/serial/FreshStart2Nodes 139.79
336 TestMultiNode/serial/DeployApp2Nodes 7.24
337 TestMultiNode/serial/PingHostFrom2Pods 1.78
338 TestMultiNode/serial/AddNode 51.63
339 TestMultiNode/serial/MultiNodeLabels 0.13
340 TestMultiNode/serial/ProfileList 1.39
341 TestMultiNode/serial/CopyFile 19.52
342 TestMultiNode/serial/StopNode 3.85
343 TestMultiNode/serial/StartAfterStop 14.99
344 TestMultiNode/serial/RestartKeepsNodes 83.68
345 TestMultiNode/serial/DeleteNode 7.44
346 TestMultiNode/serial/StopMultiNode 24.11
347 TestMultiNode/serial/RestartMultiNode 60.23
348 TestMultiNode/serial/ValidateNameConflict 62.79
352 TestPreload 134.96
353 TestScheduledStopWindows 125.41
357 TestInsufficientStorage 15.73
358 TestRunningBinaryUpgrade 380.13
361 TestMissingContainerUpgrade 254.17
363 TestStoppedBinaryUpgrade/Setup 1.52
364 TestNoKubernetes/serial/StartNoK8sWithVersion 0.24
365 TestNoKubernetes/serial/StartWithK8s 101.99
366 TestStoppedBinaryUpgrade/Upgrade 456.93
367 TestNoKubernetes/serial/StartWithStopK8s 35.6
368 TestNoKubernetes/serial/Start 24.95
369 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
370 TestNoKubernetes/serial/VerifyK8sNotRunning 0.54
371 TestNoKubernetes/serial/ProfileList 10.22
372 TestNoKubernetes/serial/Stop 1.94
373 TestNoKubernetes/serial/StartNoArgs 10.25
374 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.56
383 TestPause/serial/Start 90.85
384 TestPause/serial/SecondStartNoReconfiguration 47.35
385 TestPause/serial/Pause 1.04
386 TestPause/serial/VerifyStatus 0.62
387 TestPause/serial/Unpause 0.89
388 TestPause/serial/PauseAgain 1.18
389 TestPause/serial/DeletePaused 3.74
390 TestPause/serial/VerifyDeletedResources 1.67
391 TestStoppedBinaryUpgrade/MinikubeLogs 1.59
404 TestStartStop/group/old-k8s-version/serial/FirstStart 79.44
406 TestStartStop/group/embed-certs/serial/FirstStart 82.18
407 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.78
409 TestStartStop/group/old-k8s-version/serial/Stop 12.32
410 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.55
411 TestStartStop/group/old-k8s-version/serial/SecondStart 36.22
412 TestStartStop/group/embed-certs/serial/DeployApp 10.7
413 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.71
414 TestStartStop/group/embed-certs/serial/Stop 12.22
415 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.51
416 TestStartStop/group/embed-certs/serial/SecondStart 65.76
419 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 22.01
420 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.36
421 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.57
422 TestStartStop/group/old-k8s-version/serial/Pause 5.24
424 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.23
425 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
426 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.35
427 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.56
428 TestStartStop/group/embed-certs/serial/Pause 5.21
431 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.64
432 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.48
433 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
434 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.51
435 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.01
436 TestNetworkPlugins/group/auto/Start 99.65
437 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
438 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.53
439 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.49
440 TestStartStop/group/default-k8s-diff-port/serial/Pause 5
441 TestNetworkPlugins/group/kindnet/Start 90.55
442 TestNetworkPlugins/group/auto/KubeletFlags 0.58
443 TestNetworkPlugins/group/auto/NetCatPod 15.52
444 TestNetworkPlugins/group/auto/DNS 0.27
445 TestNetworkPlugins/group/auto/Localhost 0.21
446 TestNetworkPlugins/group/auto/HairPin 0.25
447 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
448 TestNetworkPlugins/group/kindnet/KubeletFlags 0.79
449 TestNetworkPlugins/group/kindnet/NetCatPod 17.47
450 TestNetworkPlugins/group/flannel/Start 93.27
451 TestNetworkPlugins/group/kindnet/DNS 0.24
452 TestNetworkPlugins/group/kindnet/Localhost 0.2
453 TestNetworkPlugins/group/kindnet/HairPin 0.21
454 TestNetworkPlugins/group/enable-default-cni/Start 102.6
455 TestNetworkPlugins/group/flannel/ControllerPod 6.01
456 TestNetworkPlugins/group/flannel/KubeletFlags 0.59
457 TestNetworkPlugins/group/flannel/NetCatPod 17.47
458 TestNetworkPlugins/group/flannel/DNS 0.25
459 TestNetworkPlugins/group/flannel/Localhost 0.21
460 TestNetworkPlugins/group/flannel/HairPin 0.2
461 TestNetworkPlugins/group/bridge/Start 99.36
462 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.58
463 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.43
465 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
466 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
467 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
469 TestNetworkPlugins/group/kubenet/Start 99.87
470 TestNetworkPlugins/group/bridge/KubeletFlags 0.55
471 TestNetworkPlugins/group/bridge/NetCatPod 19.54
472 TestStartStop/group/newest-cni/serial/DeployApp 0
474 TestNetworkPlugins/group/bridge/DNS 0.24
475 TestNetworkPlugins/group/bridge/Localhost 0.2
476 TestNetworkPlugins/group/bridge/HairPin 0.21
477 TestStartStop/group/no-preload/serial/Stop 1.93
478 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.56
480 TestNetworkPlugins/group/calico/Start 125.72
481 TestNetworkPlugins/group/kubenet/KubeletFlags 1.03
482 TestNetworkPlugins/group/kubenet/NetCatPod 17.47
483 TestNetworkPlugins/group/kubenet/DNS 0.24
484 TestNetworkPlugins/group/kubenet/Localhost 0.21
485 TestNetworkPlugins/group/kubenet/HairPin 0.21
486 TestNetworkPlugins/group/false/Start 94.09
487 TestStartStop/group/newest-cni/serial/Stop 3.59
488 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.57
490 TestNetworkPlugins/group/calico/ControllerPod 6.01
491 TestNetworkPlugins/group/calico/KubeletFlags 0.57
492 TestNetworkPlugins/group/calico/NetCatPod 14.51
493 TestNetworkPlugins/group/calico/DNS 0.25
494 TestNetworkPlugins/group/calico/Localhost 0.26
495 TestNetworkPlugins/group/calico/HairPin 0.21
496 TestNetworkPlugins/group/false/KubeletFlags 0.53
497 TestNetworkPlugins/group/false/NetCatPod 15.51
498 TestNetworkPlugins/group/false/DNS 0.23
499 TestNetworkPlugins/group/false/Localhost 0.21
500 TestNetworkPlugins/group/false/HairPin 0.23
501 TestNetworkPlugins/group/custom-flannel/Start 83.3
502 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.56
503 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.62
504 TestNetworkPlugins/group/custom-flannel/DNS 0.23
505 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
506 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
508 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
509 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48

TestDownloadOnly/v1.28.0/json-events (9.34s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-114300 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-114300 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (9.3400044s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.34s)

TestDownloadOnly/v1.28.0/preload-exists (0.04s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 05:29:07.785673   11304 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1210 05:29:07.829042   11304 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.79s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-114300
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-114300: exit status 85 (790.8691ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-114300 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-114300 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:28:58
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:28:58.515254   11964 out.go:360] Setting OutFile to fd 696 ...
	I1210 05:28:58.556718   11964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:28:58.556764   11964 out.go:374] Setting ErrFile to fd 700...
	I1210 05:28:58.556764   11964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1210 05:28:58.569873   11964 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1210 05:28:58.579088   11964 out.go:368] Setting JSON to true
	I1210 05:28:58.581321   11964 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3470,"bootTime":1765341068,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:28:58.582319   11964 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:28:58.588120   11964 out.go:99] [download-only-114300] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:28:58.588120   11964 notify.go:221] Checking for updates...
	W1210 05:28:58.588120   11964 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1210 05:28:58.591103   11964 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:28:58.593458   11964 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:28:58.595489   11964 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:28:58.597987   11964 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 05:28:58.601763   11964 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:28:58.602740   11964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:28:58.809986   11964 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:28:58.813486   11964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:59.469619   11964 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-10 05:28:59.446262142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:28:59.473413   11964 out.go:99] Using the docker driver based on user configuration
	I1210 05:28:59.473933   11964 start.go:309] selected driver: docker
	I1210 05:28:59.473933   11964 start.go:927] validating driver "docker" against <nil>
	I1210 05:28:59.480911   11964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:28:59.733736   11964 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-10 05:28:59.710258175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:28:59.734203   11964 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:28:59.788210   11964 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1210 05:28:59.788916   11964 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:28:59.793148   11964 out.go:171] Using Docker Desktop driver with root privileges
	I1210 05:28:59.795165   11964 cni.go:84] Creating CNI manager for ""
	I1210 05:28:59.795165   11964 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:28:59.795165   11964 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:28:59.795165   11964 start.go:353] cluster config:
	{Name:download-only-114300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-114300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:28:59.797279   11964 out.go:99] Starting "download-only-114300" primary control-plane node in "download-only-114300" cluster
	I1210 05:28:59.797279   11964 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:28:59.799882   11964 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:28:59.799882   11964 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1210 05:28:59.800315   11964 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:28:59.833335   11964 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1210 05:28:59.833452   11964 cache.go:65] Caching tarball of preloaded images
	I1210 05:28:59.833951   11964 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1210 05:28:59.836717   11964 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 05:28:59.836743   11964 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1210 05:28:59.855458   11964 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:28:59.855458   11964 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1210 05:28:59.855458   11964 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1210 05:28:59.855458   11964 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 05:28:59.855458   11964 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:28:59.905618   11964 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1210 05:28:59.906198   11964 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1210 05:29:03.516343   11964 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1210 05:29:03.516892   11964 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-114300\config.json ...
	I1210 05:29:03.517480   11964 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-114300\config.json: {Name:mk3e3acc766fe38b3f58018c57574adaa558e1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:03.518114   11964 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1210 05:29:03.520125   11964 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.28.0/kubectl.exe
	
	
	* The control-plane node download-only-114300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-114300"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.79s)
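Note: the non-zero exit is expected here; a --download-only profile never creates a host, so "minikube logs" returns exit status 85 and the test counts that as a pass. The start log above (download.go:108) also shows the preload being fetched with an MD5 checksum obtained from the GCS API and appended as a ?checksum=md5:... query parameter. A minimal sketch of a checksum-verified download in Go, using the URL and checksum exactly as logged; the helper itself is illustrative, not minikube's download package:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // download fetches url into dest and verifies the MD5 digest of the
    // bytes written, failing on any mismatch.
    func download(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := md5.New()
        // Tee the body through the hash while writing to disk.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // URL and checksum copied from the log; destination is illustrative.
        err := download(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4",
            "preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4",
            "8a955be835827bc584bcce0658a7fcc9",
        )
        if err != nil {
            fmt.Println("download failed:", err)
        }
    }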

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.74s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-114300
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.74s)

                                                
                                    
TestDownloadOnly/v1.34.3/json-events (7.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-321200 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-321200 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker: (7.2383587s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (7.24s)

                                                
                                    
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
--- PASS: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
--- PASS: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
--- PASS: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.3/LogsDuration (0.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-321200
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-321200: exit status 85 (614.2369ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-114300 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-114300 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ delete  │ -p download-only-114300                                                                                                                           │ download-only-114300 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ start   │ -o=json --download-only -p download-only-321200 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker │ download-only-321200 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:29:10
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:29:10.188133    2352 out.go:360] Setting OutFile to fd 704 ...
	I1210 05:29:10.230612    2352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:10.230612    2352 out.go:374] Setting ErrFile to fd 716...
	I1210 05:29:10.230612    2352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:10.243993    2352 out.go:368] Setting JSON to true
	I1210 05:29:10.246618    2352 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3482,"bootTime":1765341067,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:29:10.246618    2352 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:29:10.251655    2352 out.go:99] [download-only-321200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:29:10.252358    2352 notify.go:221] Checking for updates...
	I1210 05:29:10.253497    2352 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:29:10.256235    2352 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:29:10.258472    2352 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:29:10.260932    2352 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 05:29:10.264942    2352 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:29:10.265101    2352 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:29:10.380025    2352 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:29:10.383668    2352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:10.619131    2352 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-10 05:29:10.601286616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:29:10.622185    2352 out.go:99] Using the docker driver based on user configuration
	I1210 05:29:10.622185    2352 start.go:309] selected driver: docker
	I1210 05:29:10.622185    2352 start.go:927] validating driver "docker" against <nil>
	I1210 05:29:10.628863    2352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:11.012283    2352 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-10 05:29:10.993826638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:29:11.012283    2352 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:29:11.048875    2352 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1210 05:29:11.049472    2352 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:29:11.613641    2352 out.go:171] Using Docker Desktop driver with root privileges
	I1210 05:29:11.617091    2352 cni.go:84] Creating CNI manager for ""
	I1210 05:29:11.617091    2352 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:29:11.617091    2352 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:29:11.617091    2352 start.go:353] cluster config:
	{Name:download-only-321200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-321200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:11.621627    2352 out.go:99] Starting "download-only-321200" primary control-plane node in "download-only-321200" cluster
	I1210 05:29:11.621699    2352 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:29:11.623996    2352 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:29:11.624523    2352 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 05:29:11.624567    2352 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	W1210 05:29:11.665244    2352 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 05:29:11.679804    2352 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:29:11.680684    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1210 05:29:11.680954    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1210 05:29:11.681004    2352 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 05:29:11.681039    2352 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 05:29:11.681039    2352 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 05:29:11.681039    2352 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	W1210 05:29:11.874618    2352 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.34.3
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.12.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.12.1
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.34.3
	I1210 05:29:11.874618    2352 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-321200\config.json ...
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.34.3
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1210 05:29:11.874618    2352 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.34.3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.34.3
	I1210 05:29:11.874618    2352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-321200\config.json: {Name:mkc8bf9d424aed4c1cd97482b9808b0586bcc71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:11.876686    2352 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
	I1210 05:29:11.880246    2352 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubeadm
	I1210 05:29:11.880411    2352 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubectl
	I1210 05:29:11.880411    2352 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.34.3/kubelet
	I1210 05:29:14.677656    2352 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.3/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.34.3/kubectl.exe
	I1210 05:29:14.732540    2352 cache.go:107] acquiring lock: {Name:mk6b392ee3857c8c549222e5d4bc0d459f0d6374 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.732622    2352 cache.go:107] acquiring lock: {Name:mkc3ba12d2dc6e754bbb22b72e061ebdaf85fee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.732622    2352 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.733396    2352 cache.go:107] acquiring lock: {Name:mk2c17fa70a505b9214cef4e32cab64efc0821c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.734014    2352 cache.go:107] acquiring lock: {Name:mk4347e5873954a8abe84535c3dbf0018de433f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.734014    2352 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.734725    2352 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.3
	I1210 05:29:14.734725    2352 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.734725    2352 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.734725    2352 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.739563    2352 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.739830    2352 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.740670    2352 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.741990    2352 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.755788    2352 cache.go:107] acquiring lock: {Name:mk712909f8cebe825fb8a435a44c90c8d2ca7c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 05:29:14.756797    2352 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.758797    2352 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1210 05:29:14.758797    2352 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 05:29:14.758797    2352 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1210 05:29:14.758797    2352 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.3
	I1210 05:29:14.758797    2352 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1210 05:29:14.763799    2352 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.3
	I1210 05:29:14.765795    2352 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.3
	I1210 05:29:14.773793    2352 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.3
	W1210 05:29:14.927359    2352 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:29:14.977869    2352 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.12.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:29:15.027912    2352 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:29:15.076379    2352 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:29:15.125377    2352 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1210 05:29:15.193751    2352 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.34.3 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	
	
	* The control-plane node download-only-321200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-321200"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.62s)
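Note: unlike v1.28.0, no v1.34.3 preload exists yet. The log above records a 404 from both the GCS bucket and the GitHub releases mirror, after which minikube falls back to caching each component image individually (the run of localpath.go and image.go lines). A minimal sketch of that probe-then-fall-back flow, with the two mirror URLs copied from the log; the control flow is an assumption for illustration, not minikube's preload package:

    package main

    import (
        "fmt"
        "net/http"
    )

    // preloadExists probes a remote preload URL without downloading it.
    func preloadExists(url string) bool {
        resp, err := http.Head(url)
        if err != nil {
            return false
        }
        resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        mirrors := []string{
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4",
            "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4",
        }
        for _, u := range mirrors {
            if preloadExists(u) {
                fmt.Println("preload found:", u)
                return
            }
        }
        // Both mirrors 404 in the log, so the fallback is to pull and
        // cache every component image one by one.
        fmt.Println("no preload available; falling back to per-image caching")
    }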

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (1.00s)
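Note: the "windows sanitize" lines in the v1.34.3 start log rewrite image references before using them as cache file names, because ':' is not a legal character in Windows file names. The replacement rule below is inferred from the before/after paths in the log, not taken from minikube's source:

    package main

    import (
        "fmt"
        "strings"
    )

    // sanitize makes an image reference usable as a Windows file name by
    // replacing ':' (invalid on NTFS) with '_', matching the rewrites in
    // the log, e.g. kube-proxy:v1.34.3 -> kube-proxy_v1.34.3.
    func sanitize(ref string) string {
        return strings.ReplaceAll(ref, ":", "_")
    }

    func main() {
        fmt.Println(sanitize(`registry.k8s.io\kube-proxy:v1.34.3`))
        // Output: registry.k8s.io\kube-proxy_v1.34.3
    }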

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-321200
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.46s)
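Note: the cache.go "acquiring lock" lines in the same log show one lock being taken per image before it is downloaded, so concurrent cache fills for the same image do not race. A minimal illustrative sketch of per-key locking in Go; minikube's real implementation appears to use named locks with delays and timeouts, as the Delay/Timeout fields in the log suggest:

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        mu    sync.Mutex
        locks = map[string]*sync.Mutex{}
    )

    // lockFor returns a process-wide mutex dedicated to one cache key.
    func lockFor(key string) *sync.Mutex {
        mu.Lock()
        defer mu.Unlock()
        if l, ok := locks[key]; ok {
            return l
        }
        l := &sync.Mutex{}
        locks[key] = l
        return l
    }

    func main() {
        images := []string{"registry.k8s.io/kube-proxy:v1.34.3", "registry.k8s.io/etcd:3.6.5-0"}
        var wg sync.WaitGroup
        for _, img := range images {
            wg.Add(1)
            go func(img string) {
                defer wg.Done()
                l := lockFor(img)
                l.Lock()
                defer l.Unlock()
                fmt.Println("caching", img) // the download would happen here
            }(img)
        }
        wg.Wait()
    }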

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (14.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-506000 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-506000 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=docker: (14.5580947s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (14.56s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1210 05:29:34.251056   11304 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
I1210 05:29:34.251203   11304 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-506000
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-506000: exit status 85 (198.4528ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-114300 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker      │ download-only-114300 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ delete  │ -p download-only-114300                                                                                                                                │ download-only-114300 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ start   │ -o=json --download-only -p download-only-321200 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=docker --driver=docker      │ download-only-321200 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                  │ minikube             │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ delete  │ -p download-only-321200                                                                                                                                │ download-only-321200 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │ 10 Dec 25 05:29 UTC │
	│ start   │ -o=json --download-only -p download-only-506000 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=docker --driver=docker │ download-only-506000 │ minikube4\jenkins │ v1.37.0 │ 10 Dec 25 05:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 05:29:19
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 05:29:19.765523    9316 out.go:360] Setting OutFile to fd 1380 ...
	I1210 05:29:19.808550    9316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:19.808550    9316 out.go:374] Setting ErrFile to fd 1376...
	I1210 05:29:19.808550    9316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:29:19.823157    9316 out.go:368] Setting JSON to true
	I1210 05:29:19.825761    9316 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3491,"bootTime":1765341067,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:29:19.825761    9316 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:29:19.832415    9316 out.go:99] [download-only-506000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:29:19.832415    9316 notify.go:221] Checking for updates...
	I1210 05:29:19.835285    9316 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:29:19.837620    9316 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:29:19.839733    9316 out.go:171] MINIKUBE_LOCATION=22094
	I1210 05:29:19.841839    9316 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1210 05:29:19.846844    9316 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 05:29:19.847099    9316 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:29:19.964488    9316 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:29:19.969833    9316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:20.196036    9316 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-10 05:29:20.177294661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:29:20.199433    9316 out.go:99] Using the docker driver based on user configuration
	I1210 05:29:20.199504    9316 start.go:309] selected driver: docker
	I1210 05:29:20.199504    9316 start.go:927] validating driver "docker" against <nil>
	I1210 05:29:20.206300    9316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:29:20.443634    9316 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:78 SystemTime:2025-12-10 05:29:20.425648694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:29:20.444156    9316 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 05:29:20.478937    9316 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1210 05:29:20.479584    9316 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 05:29:20.814814    9316 out.go:171] Using Docker Desktop driver with root privileges
	I1210 05:29:20.816654    9316 cni.go:84] Creating CNI manager for ""
	I1210 05:29:20.817626    9316 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1210 05:29:20.817712    9316 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 05:29:20.817978    9316 start.go:353] cluster config:
	{Name:download-only-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-506000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:29:20.820321    9316 out.go:99] Starting "download-only-506000" primary control-plane node in "download-only-506000" cluster
	I1210 05:29:20.820402    9316 cache.go:134] Beginning downloading kic base image for docker with docker
	I1210 05:29:20.822746    9316 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1210 05:29:20.822746    9316 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:29:20.822746    9316 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1210 05:29:20.853774    9316 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:29:20.853822    9316 cache.go:65] Caching tarball of preloaded images
	I1210 05:29:20.854356    9316 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:29:20.856615    9316 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1210 05:29:20.856687    9316 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1210 05:29:20.880719    9316 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1210 05:29:20.881089    9316 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1210 05:29:20.881089    9316 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765275396-22083@sha256_ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f.tar
	I1210 05:29:20.881089    9316 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1210 05:29:20.881089    9316 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1210 05:29:20.881089    9316 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1210 05:29:20.881727    9316 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1210 05:29:20.928522    9316 preload.go:295] Got checksum from GCS API "69672a26de652c41c080c5ec079f9718"
	I1210 05:29:20.929045    9316 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4?checksum=md5:69672a26de652c41c080c5ec079f9718 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1210 05:29:23.702153    9316 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on docker
	I1210 05:29:23.703017    9316 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-506000\config.json ...
	I1210 05:29:23.703017    9316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-506000\config.json: {Name:mk71328e47c4933fb39ff93137f01caeb6b351a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 05:29:23.704727    9316 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime docker
	I1210 05:29:23.705200    9316 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.35.0-rc.1/kubectl.exe
	
	
	* The control-plane node download-only-506000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-506000"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.20s)
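
The Last Start log above shows the preload fetch pattern: the MD5 checksum is first read from the GCS API, then the tarball URL is fetched with a "?checksum=md5:..." suffix so the download is verified as it lands. A minimal sketch of such a checksum-verified download, assuming plain net/http and crypto/md5 rather than whatever helper minikube's download.go actually wraps:

	package download

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// FetchWithMD5 downloads url to dest and fails if the body's MD5 does
	// not match wantMD5 (a lowercase hex string such as the
	// "69672a26de652c41c080c5ec079f9718" logged above).
	func FetchWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash the stream while writing it to disk in one pass.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}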

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.7s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.70s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.54s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-506000
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.54s)

TestBinaryMirror (4.29s)

=== RUN   TestBinaryMirror
I1210 05:29:41.375647   11304 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-154100 --alsologtostderr --binary-mirror http://127.0.0.1:64862 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-154100 --alsologtostderr --binary-mirror http://127.0.0.1:64862 --driver=docker: (3.5286768s)
helpers_test.go:176: Cleaning up "binary-mirror-154100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-154100
--- PASS: TestBinaryMirror (4.29s)
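
TestBinaryMirror points minikube at http://127.0.0.1:64862 via --binary-mirror so kubectl.exe is fetched locally instead of from dl.k8s.io. A sketch of the kind of static file server such a mirror needs, assuming the release binaries have already been laid out under ./mirror in the same path scheme as the upstream URLs (the test harness starts its own mirror; this only illustrates the idea):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve pre-downloaded release binaries, e.g.
		// ./mirror/v1.34.3/bin/windows/amd64/kubectl.exe
		log.Fatal(http.ListenAndServe("127.0.0.1:64862",
			http.FileServer(http.Dir("./mirror"))))
	}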

TestOffline (147.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-513700 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-513700 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (2m23.7335938s)
helpers_test.go:176: Cleaning up "offline-docker-513700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-513700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-513700: (4.1731012s)
--- PASS: TestOffline (147.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-949500
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-949500: exit status 85 (225.9688ms)

-- stdout --
	* Profile "addons-949500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-949500"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)
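
Exit status 85 is the expected outcome here: enabling an addon on a profile that does not exist should fail, and the test passes only because the failure is exactly the one it wants. A sketch of that kind of exit-code assertion with os/exec, not the actual helper in addons_test.go:

	package addons

	import (
		"errors"
		"os/exec"
		"testing"
	)

	// TestEnableAddonMissingProfile expects minikube to exit with status 85
	// when the profile is unknown; success or any other exit code fails
	// the test.
	func TestEnableAddonMissingProfile(t *testing.T) {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"addons", "enable", "dashboard", "-p", "addons-949500")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if !errors.As(err, &exitErr) || exitErr.ExitCode() != 85 {
			t.Fatalf("expected exit status 85, got %v", err)
		}
	}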

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-949500
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-949500: exit status 85 (219.592ms)

-- stdout --
	* Profile "addons-949500" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-949500"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (299.68s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-949500 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-949500 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m59.6746881s)
--- PASS: TestAddons/Setup (299.68s)

TestAddons/serial/Volcano (50.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:870: volcano-scheduler stabilized in 16.1853ms
addons_test.go:878: volcano-admission stabilized in 16.1853ms
addons_test.go:886: volcano-controller stabilized in 16.1853ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-5bvpr" [232d061a-c4ad-4473-ac6b-cfb2c5aec15e] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0074203s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-822wk" [fdd6e44d-02a7-4a70-ae91-7edc095f1fbd] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0071039s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-8pjdf" [17dee52e-95b7-40cc-9818-02b338614cc5] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0063237s
addons_test.go:905: (dbg) Run:  kubectl --context addons-949500 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-949500 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-949500 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [3e73dfb4-fe58-40c6-b617-b588c55e14e6] Pending
helpers_test.go:353: "test-job-nginx-0" [3e73dfb4-fe58-40c6-b617-b588c55e14e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [3e73dfb4-fe58-40c6-b617-b588c55e14e6] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.0066336s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable volcano --alsologtostderr -v=1: (12.4767768s)
--- PASS: TestAddons/serial/Volcano (50.27s)
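
The repeated "waiting 6m0s for pods matching ..." lines above are one poll loop: list pods by label selector, succeed once they are all Running, give up at the deadline. A minimal sketch of that loop against client-go, assuming the kubeconfig context is already set up (package names are real client-go imports; the function and namespace/selector values are illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunning polls until every pod matching selector in ns is
	// Running, or the timeout expires.
	func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q in %q not running within %v", selector, ns, timeout)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		fmt.Println(waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute))
	}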

TestAddons/serial/GCPAuth/Namespaces (0.26s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-949500 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-949500 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.26s)

TestAddons/serial/GCPAuth/FakeCredentials (11.16s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-949500 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-949500 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [35850049-264b-404e-82ed-3c378a995aef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [35850049-264b-404e-82ed-3c378a995aef] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0066077s
addons_test.go:696: (dbg) Run:  kubectl --context addons-949500 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-949500 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-949500 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-949500 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.16s)

TestAddons/parallel/RegistryCreds (1.61s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 55.1227ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-949500
addons_test.go:334: (dbg) Run:  kubectl --context addons-949500 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable registry-creds --alsologtostderr -v=1: (1.0339561s)
--- PASS: TestAddons/parallel/RegistryCreds (1.61s)

TestAddons/parallel/InspektorGadget (12.35s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-lxmlp" [85c6083c-2f40-4bf0-8245-5d4e4d3e80ad] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0395065s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable inspektor-gadget --alsologtostderr -v=1: (6.3050379s)
--- PASS: TestAddons/parallel/InspektorGadget (12.35s)

TestAddons/parallel/MetricsServer (7.49s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 39.3315ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-2lc75" [275e3ca6-1380-4889-9943-dbf5a7593cee] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0347119s
addons_test.go:465: (dbg) Run:  kubectl --context addons-949500 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable metrics-server --alsologtostderr -v=1: (2.2809184s)
--- PASS: TestAddons/parallel/MetricsServer (7.49s)

TestAddons/parallel/CSI (58.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 05:36:12.650689   11304 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 05:36:12.705815   11304 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 05:36:12.705815   11304 kapi.go:107] duration metric: took 55.1247ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 55.1247ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-949500 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-949500 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [737eb90e-3428-4ab1-9f1b-9abfe87c601b] Pending
helpers_test.go:353: "task-pv-pod" [737eb90e-3428-4ab1-9f1b-9abfe87c601b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [737eb90e-3428-4ab1-9f1b-9abfe87c601b] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0060706s
addons_test.go:574: (dbg) Run:  kubectl --context addons-949500 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-949500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-949500 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-949500 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-949500 delete pod task-pv-pod: (1.3656528s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-949500 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-949500 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-949500 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [5e7e2519-5f6c-4804-908f-2f8d45649da7] Pending
helpers_test.go:353: "task-pv-pod-restore" [5e7e2519-5f6c-4804-908f-2f8d45649da7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [5e7e2519-5f6c-4804-908f-2f8d45649da7] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0073915s
addons_test.go:616: (dbg) Run:  kubectl --context addons-949500 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-949500 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-949500 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable volumesnapshots --alsologtostderr -v=1: (1.2480852s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.7421627s)
--- PASS: TestAddons/parallel/CSI (58.74s)
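
The long runs of "kubectl ... get pvc ... -o jsonpath={.status.phase}" above are the harness polling a PVC until it reports Bound. A sketch of that poll, shelling out to kubectl the same way (binary, context, and PVC names copied from the log; the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound polls the PVC's .status.phase via kubectl until it is
	// "Bound" or the timeout expires.
	func waitPVCBound(kubeContext, name, ns string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

	func main() {
		fmt.Println(waitPVCBound("addons-949500", "hpvc", "default", 6*time.Minute))
	}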

TestAddons/parallel/Headlamp (29.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-949500 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-949500 --alsologtostderr -v=1: (1.2698641s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-rj9k4" [72caad29-ff61-4ce3-8971-2c9728884ac8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-rj9k4" [72caad29-ff61-4ce3-8971-2c9728884ac8] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.0050418s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable headlamp --alsologtostderr -v=1: (6.5407689s)
--- PASS: TestAddons/parallel/Headlamp (29.82s)

TestAddons/parallel/CloudSpanner (6.03s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-q49f2" [ac0bd1cd-8c43-4dec-b8de-3c10e8734703] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0072806s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable cloud-spanner --alsologtostderr -v=1: (1.016516s)
--- PASS: TestAddons/parallel/CloudSpanner (6.03s)

TestAddons/parallel/LocalPath (57.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-949500 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-949500 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [853d356a-b87f-4a7e-bb55-5335f07e5ef7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [853d356a-b87f-4a7e-bb55-5335f07e5ef7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [853d356a-b87f-4a7e-bb55-5335f07e5ef7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0053182s
addons_test.go:969: (dbg) Run:  kubectl --context addons-949500 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 ssh "cat /opt/local-path-provisioner/pvc-c7b2108a-5960-4f74-b79d-77f19fc7c2a1_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-949500 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-949500 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.3287228s)
--- PASS: TestAddons/parallel/LocalPath (57.03s)

TestAddons/parallel/NvidiaDevicePlugin (7.88s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-lknj2" [30c9c6ce-9642-4962-90c9-f6618aa0b119] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0136755s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.861327s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.88s)

TestAddons/parallel/Yakd (13.33s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-d2fnm" [7b6d2c59-ef22-4541-8cdc-d904eb71e7f3] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.057626s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable yakd --alsologtostderr -v=1: (7.2725956s)
--- PASS: TestAddons/parallel/Yakd (13.33s)

TestAddons/parallel/AmdGpuDevicePlugin (7.65s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-lzz4v" [f4337876-09a1-4839-b29a-b961ea3318a5] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0129433s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.6378314s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (7.65s)

TestAddons/StoppedEnableDisable (12.88s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-949500
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-949500: (12.0699156s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-949500
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-949500
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-949500
--- PASS: TestAddons/StoppedEnableDisable (12.88s)

TestCertOptions (80.71s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-451200 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E1210 07:14:02.021481   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:14:18.945834   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-451200 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m7.3890691s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-451200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1210 07:15:02.946559   11304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-451200
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-451200 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-451200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-451200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-451200: (12.0841821s)
--- PASS: TestCertOptions (80.71s)
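
The SAN check above can be reproduced by hand. A minimal Go sketch (minikube assumed on PATH, profile name copied from the log; the real assertions live in cert_options_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Dump the apiserver certificate from inside the node, as the test does.
		out, err := exec.Command("minikube", "-p", "cert-options-451200", "ssh",
			"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
		if err != nil {
			panic(err)
		}
		// The extra --apiserver-ips / --apiserver-names values should appear
		// as subject alternative names in the certificate.
		for _, san := range []string{"192.168.15.15", "www.google.com"} {
			if !strings.Contains(string(out), san) {
				fmt.Printf("missing SAN %q in apiserver.crt\n", san)
			}
		}
	}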

TestCertExpiration (286.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-804900 --memory=3072 --cert-expiration=3m --driver=docker
E1210 07:13:02.328837   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-804900 --memory=3072 --cert-expiration=3m --driver=docker: (1m4.5842346s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-804900 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-804900 --memory=3072 --cert-expiration=8760h --driver=docker: (35.7938724s)
helpers_test.go:176: Cleaning up "cert-expiration-804900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-804900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-804900: (6.0377782s)
--- PASS: TestCertExpiration (286.42s)
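
The two --cert-expiration values bracket the behavior: 3m lets the certificates lapse while the profile exists, and the second start with 8760h (365 days) regenerates them. A trivial Go check of that arithmetic:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")   // forces expiry during the test
		long, _ := time.ParseDuration("8760h") // the renewal window: one year
		fmt.Println(short, long, long.Hours()/24, "days") // 3m0s 8760h0m0s 365 days
	}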

TestDockerFlags (82.76s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-112800 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-112800 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m14.736882s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-112800 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-112800 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-112800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-112800
E1210 07:14:45.960701   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-112800: (6.8266849s)
--- PASS: TestDockerFlags (82.76s)
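
The --docker-env values should surface in the docker unit's Environment= property and the --docker-opt values in its ExecStart line, which is what the two systemctl show calls verify. A standalone sketch of the first check (minikube assumed on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "docker-flags-112800", "ssh",
			"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			panic(err)
		}
		// Both --docker-env pairs from the start invocation should be present.
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(string(out), kv) {
				fmt.Printf("expected %q in docker Environment, got: %s\n", kv, out)
			}
		}
	}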

TestForceSystemdFlag (59.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-516800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-516800 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: (54.9297613s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-516800 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-516800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-516800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-516800: (4.1494018s)
--- PASS: TestForceSystemdFlag (59.76s)
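
The docker info call is the whole assertion: with --force-systemd, dockerd inside the node should report the systemd cgroup driver instead of cgroupfs. A standalone sketch (minikube assumed on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "force-systemd-flag-516800", "ssh",
			"docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			panic(err)
		}
		if got := strings.TrimSpace(string(out)); got != "systemd" {
			fmt.Printf("expected cgroup driver systemd, got %q\n", got)
		}
	}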

TestForceSystemdEnv (78.89s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-618500 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-618500 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m14.6004623s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-618500 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-618500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-618500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-618500: (3.6406557s)
--- PASS: TestForceSystemdEnv (78.89s)

TestErrorSpam/start (2.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 start --dry-run
--- PASS: TestErrorSpam/start (2.55s)

TestErrorSpam/status (2.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 status
--- PASS: TestErrorSpam/status (2.14s)

TestErrorSpam/pause (2.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 pause: (1.2751946s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 pause
--- PASS: TestErrorSpam/pause (2.72s)

TestErrorSpam/unpause (2.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 unpause
--- PASS: TestErrorSpam/unpause (2.53s)

TestErrorSpam/stop (19.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 stop: (11.9776945s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 stop: (3.5665466s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-259400 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-259400 stop: (3.9321261s)
--- PASS: TestErrorSpam/stop (19.48s)
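
Each TestErrorSpam subtest above appears to run the same subcommand several times against the nospam profile and fail if unexpected warning or error lines show up. A rough standalone approximation of that scan in Go (the harness's actual allow-list and log-file handling live in error_spam_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "nospam-259400", "status")
		var stderr strings.Builder
		cmd.Stderr = &stderr
		_ = cmd.Run() // the spam check cares about output, not the exit code
		for _, line := range strings.Split(stderr.String(), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				fmt.Printf("unexpected stderr line: %q\n", line)
			}
		}
	}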

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (97.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-493600 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1210 05:39:45.882466   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:45.890474   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:45.903467   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:45.926465   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:45.969461   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:46.052461   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:46.214738   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:46.536756   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:47.179749   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:48.462469   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:51.024730   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:39:56.147318   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:40:06.389396   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 05:40:26.872035   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-493600 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m37.1087763s)
--- PASS: TestFunctional/serial/StartWithProxy (97.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (47.54s)

=== RUN   TestFunctional/serial/SoftStart
I1210 05:40:50.133314   11304 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-493600 --alsologtostderr -v=8
E1210 05:41:07.834754   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-493600 --alsologtostderr -v=8: (47.5396526s)
functional_test.go:678: soft start took 47.5405512s for "functional-493600" cluster.
I1210 05:41:37.673560   11304 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (47.54s)

TestFunctional/serial/KubeContext (0.1s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

TestFunctional/serial/KubectlGetPods (0.28s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-493600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.28s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 cache add registry.k8s.io/pause:3.1: (3.1818378s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 cache add registry.k8s.io/pause:3.3: (3.1205578s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 cache add registry.k8s.io/pause:latest: (3.0547214s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.36s)

TestFunctional/serial/CacheCmd/cache/add_local (4.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-493600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local787581072\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-493600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local787581072\001: (1.3389669s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cache add minikube-local-cache-test:functional-493600
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 cache add minikube-local-cache-test:functional-493600: (2.5950125s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cache delete minikube-local-cache-test:functional-493600
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-493600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

TestFunctional/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.18s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.60s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (575.7414ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 cache reload: (2.7321689s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.46s)
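
The reload sequence is: remove the image inside the node, prove it is gone, run cache reload, prove it is back. A standalone sketch of the same four steps (minikube assumed on PATH; this mirrors the logged commands, not the harness's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs minikube against the functional profile and reports
	// whether the command exited zero.
	func mk(args ...string) bool {
		a := append([]string{"-p", "functional-493600"}, args...)
		return exec.Command("minikube", a...).Run() == nil
	}

	func main() {
		mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
			fmt.Println("image unexpectedly still present")
		}
		mk("cache", "reload")
		if !mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
			fmt.Println("image not restored by cache reload")
		}
	}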

TestFunctional/serial/CacheCmd/cache/delete (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.37s)

TestFunctional/serial/MinikubeKubectlCmd (0.36s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 kubectl -- --context functional-493600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.36s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-493600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.18s)

TestFunctional/serial/ExtraConfig (52.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-493600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 05:42:29.759002   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-493600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.5681029s)
functional_test.go:776: restart took 52.5681584s for "functional-493600" cluster.
I1210 05:42:52.528648   11304 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (52.57s)

TestFunctional/serial/ComponentHealth (0.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-493600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
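
The health check parses the control-plane pod list as JSON and requires each component to be Running with a true Ready condition. A minimal Go sketch of that parse, assuming kubectl and the functional-493600 context are available (the harness's real check lives in functional_test.go):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList keeps only the fields the health check needs.
	type podList struct {
		Items []struct {
			Metadata struct{ Name string } `json:"metadata"`
			Status   struct {
				Phase      string                            `json:"phase"`
				Conditions []struct{ Type, Status string }   `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-493600",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
				}
			}
		}
	}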

TestFunctional/serial/LogsCmd (1.76s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 logs: (1.7567891s)
--- PASS: TestFunctional/serial/LogsCmd (1.76s)

TestFunctional/serial/LogsFileCmd (1.8s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3068324692\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3068324692\001\logs.txt: (1.7996622s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

TestFunctional/serial/InvalidService (5.13s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-493600 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-493600
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-493600: exit status 115 (1.0466593s)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31846 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-493600 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.13s)
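
invalid-svc has no running backing pod, so minikube service fails with SVC_UNREACHABLE, which surfaces as exit status 115 in the run above. A standalone check of that exit code in Go (a sketch, minikube assumed on PATH):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "service", "invalid-svc",
			"-p", "functional-493600").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
			fmt.Println("got the expected SVC_UNREACHABLE exit code")
		} else {
			fmt.Println("unexpected result:", err)
		}
	}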

TestFunctional/parallel/ConfigCmd (1.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 config get cpus: exit status 14 (194.7205ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 config get cpus: exit status 14 (170.997ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.23s)
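
Both config get calls on the unset key exit with status 14 rather than zero, which is what the test asserts after each config unset cpus. Reading that exit code from Go (a sketch, minikube assumed on PATH):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "-p", "functional-493600",
			"config", "get", "cpus").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode()) // 14 while cpus is unset
		}
	}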

TestFunctional/parallel/DryRun (1.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-493600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-493600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (720.8768ms)

-- stdout --
	* [functional-493600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 05:43:18.765969    4208 out.go:360] Setting OutFile to fd 1564 ...
	I1210 05:43:18.827964    4208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:18.827964    4208 out.go:374] Setting ErrFile to fd 1504...
	I1210 05:43:18.827964    4208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:18.842982    4208 out.go:368] Setting JSON to false
	I1210 05:43:18.845970    4208 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4330,"bootTime":1765341067,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:43:18.845970    4208 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:43:18.849965    4208 out.go:179] * [functional-493600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:43:18.851964    4208 notify.go:221] Checking for updates...
	I1210 05:43:18.852964    4208 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:43:18.860966    4208 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:43:18.862966    4208 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:43:18.864964    4208 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:43:18.867974    4208 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:18.869976    4208 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 05:43:18.870980    4208 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:18.993978    4208 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:43:18.997974    4208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:19.280859    4208 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 05:43:19.257579074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:43:19.283850    4208 out.go:179] * Using the docker driver based on existing profile
	I1210 05:43:19.287847    4208 start.go:309] selected driver: docker
	I1210 05:43:19.287847    4208 start.go:927] validating driver "docker" against &{Name:functional-493600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-493600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:43:19.287847    4208 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:43:19.336871    4208 out.go:203] 
	W1210 05:43:19.338861    4208 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:43:19.341867    4208 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-493600 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.63s)
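
The first dry run deliberately under-allocates memory; minikube rejects 250MB as below the 1800MB usable minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as captured above. A standalone check of that behavior (a sketch, minikube assumed on PATH):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "start", "-p", "functional-493600",
			"--dry-run", "--memory", "250MB", "--driver=docker").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
			fmt.Println("memory validation rejected 250MB as expected")
		}
	}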

TestFunctional/parallel/InternationalLanguage (0.66s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-493600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-493600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (655.1906ms)

-- stdout --
	* [functional-493600] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 05:43:18.385783    7292 out.go:360] Setting OutFile to fd 1804 ...
	I1210 05:43:18.433794    7292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:18.433794    7292 out.go:374] Setting ErrFile to fd 1940...
	I1210 05:43:18.433794    7292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 05:43:18.446772    7292 out.go:368] Setting JSON to false
	I1210 05:43:18.449776    7292 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4330,"bootTime":1765341068,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 05:43:18.449776    7292 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 05:43:18.453769    7292 out.go:179] * [functional-493600] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 05:43:18.456778    7292 notify.go:221] Checking for updates...
	I1210 05:43:18.458780    7292 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 05:43:18.460783    7292 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 05:43:18.462770    7292 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 05:43:18.464776    7292 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 05:43:18.466786    7292 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 05:43:18.469779    7292 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 05:43:18.470779    7292 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 05:43:18.584769    7292 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 05:43:18.588770    7292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 05:43:18.852964    7292 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-10 05:43:18.833970398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 05:43:18.854964    7292 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 05:43:18.860966    7292 start.go:309] selected driver: docker
	I1210 05:43:18.860966    7292 start.go:927] validating driver "docker" against &{Name:functional-493600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-493600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 05:43:18.860966    7292 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 05:43:18.911971    7292 out.go:203] 
	W1210 05:43:18.913964    7292 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 05:43:18.916968    7292 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.66s)
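Note: the stderr above shows minikube's requested-memory validation in action: the run asked for 250 MiB, which is below the 1800 MB usable minimum, so start aborts with RSRC_INSUFFICIENT_REQ_MEMORY. A minimal Go sketch of that kind of guard, using hypothetical names rather than minikube's actual validation code:

package main

import "fmt"

const minUsableMB = 1800 // usable minimum reported in the log above

// validateMemory mirrors the guard seen in the stderr: reject any request
// below the usable minimum before the driver is started.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to", err) // matches the X-prefixed fatal line above
	}
}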

                                                
                                    
TestFunctional/parallel/StatusCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.85s)
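The -f flag in the second invocation is a Go text/template evaluated against the status struct (note that "kublet" is spelled that way in the test's own format string). A short sketch of how such a template renders, with a stand-in Status type in place of minikube's internal one:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders status templates against.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same shape as the format string passed to `minikube status -f` above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}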

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.51s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c6048eaf-a873-4e04-af9b-0c8592b63149] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0054653s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-493600 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-493600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-493600 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-493600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1a7efc3e-3b13-45e3-9b5d-550734d5baaf] Pending
helpers_test.go:353: "sp-pod" [1a7efc3e-3b13-45e3-9b5d-550734d5baaf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [1a7efc3e-3b13-45e3-9b5d-550734d5baaf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0057403s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-493600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-493600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-493600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [359da443-0992-4a0f-94f8-8f8faf699610] Pending
helpers_test.go:353: "sp-pod" [359da443-0992-4a0f-94f8-8f8faf699610] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [359da443-0992-4a0f-94f8-8f8faf699610] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0073994s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-493600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.79s)
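The persistence check above reduces to: touch a file through the pod, delete and recreate the pod, then list the mount to confirm the file survived on the claim. A condensed sketch that shells out to kubectl the same way the test does (the run helper is hypothetical; pod and manifest names are the test's):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the functional-493600 context, mirroring the test.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-493600"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	_ = run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the mount
	_ = run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits here until the new sp-pod is Running.
	_ = run("exec", "sp-pod", "--", "ls", "/tmp/mount") // file must still be present
}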

                                                
                                    
TestFunctional/parallel/SSHCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.27s)

                                                
                                    
TestFunctional/parallel/CpCmd (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh -n functional-493600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cp functional-493600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2835005705\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh -n functional-493600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh -n functional-493600 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.48s)

                                                
                                    
TestFunctional/parallel/MySQL (75.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-493600 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-x5mcs" [ec5dde58-5014-4973-8759-4a59a51fef30] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-x5mcs" [ec5dde58-5014-4973-8759-4a59a51fef30] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 52.0117325s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;": exit status 1 (210.8743ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:44:29.189252   11304 retry.go:31] will retry after 1.122458676s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;": exit status 1 (202.1265ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:44:30.519289   11304 retry.go:31] will retry after 1.879558525s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;": exit status 1 (287.842ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:44:32.691734   11304 retry.go:31] will retry after 2.099606052s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;": exit status 1 (194.2812ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:44:34.990561   11304 retry.go:31] will retry after 2.544455447s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;": exit status 1 (247.6297ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:44:37.787592   11304 retry.go:31] will retry after 4.922644796s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;": exit status 1 (196.1126ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 05:44:42.912104   11304 retry.go:31] will retry after 8.829092601s: exit status 1
E1210 05:44:45.885980   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-493600 exec mysql-6bcdcbc558-x5mcs -- mysql -ppassword -e "show databases;"
E1210 05:45:13.602612   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (75.34s)
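The retry.go lines show the pattern at work while mysqld finishes initializing: rerun the query with a growing delay until it succeeds or the 10m window closes. A minimal sketch of that loop; the doubling factor is an assumption (the log shows jittered, roughly increasing delays rather than strict doubling):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", "functional-493600", "exec",
			"mysql-6bcdcbc558-x5mcs", "--", "mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("will retry after %v: exit status 1\n", delay)
		time.Sleep(delay)
		delay *= 2 // assumed growth factor
	}
	fmt.Println("timed out waiting for mysql")
}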

                                                
                                    
TestFunctional/parallel/FileSync (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11304/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /etc/test/nested/copy/11304/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.55s)

                                                
                                    
TestFunctional/parallel/CertSync (3.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11304.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /etc/ssl/certs/11304.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11304.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /usr/share/ca-certificates/11304.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/113042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /etc/ssl/certs/113042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/113042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /usr/share/ca-certificates/113042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.28s)
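The 51391683.0 and 3ec20f2e.0 entries are OpenSSL subject-hash filenames: c_rehash-style names whose basename is the hash of the certificate's subject. One way to confirm the pairing, shelling out to openssl (a sketch; the cert path is the one exercised above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints the subject hash of the PEM cert; it should match the <hash>.0 filename.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", "/etc/ssl/certs/11304.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("expected filename: %s.0\n", strings.TrimSpace(string(out)))
}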

                                                
                                    
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-493600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 ssh "sudo systemctl is-active crio": exit status 1 (571.7268ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
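systemctl is-active exits non-zero for inactive units (status 3 here), so the test asserts on the exit code and the "inactive" stdout rather than treating a non-zero exit as a failure. A sketch of reading that exit code in Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	fmt.Printf("stdout: %s", out) // "inactive\n" when the runtime is disabled
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 3 means "not active", which is the expected result here.
		fmt.Println("exit status:", ee.ExitCode())
	}
}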

                                                
                                    
TestFunctional/parallel/License (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.3629605s)
--- PASS: TestFunctional/parallel/License (1.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-493600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-493600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-493600 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 5056: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 256: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-493600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.88s)
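The "unable to kill pid ...: OpenProcess: The parameter is incorrect" lines are Windows' way of reporting a pid that has already exited; the helper logs them and moves on. A sketch of that tolerant cleanup (killIfRunning is a hypothetical helper, not the suite's code):

package main

import (
	"fmt"
	"os"
)

// killIfRunning tries to terminate pid but treats lookup or kill failures as
// "already dead", which is how helpers_test.go handles OpenProcess errors.
func killIfRunning(pid int) {
	p, err := os.FindProcess(pid) // errors on Windows when the pid is gone
	if err != nil {
		fmt.Printf("pid %d: %v (assuming dead)\n", pid, err)
		return
	}
	if err := p.Kill(); err != nil {
		fmt.Printf("unable to kill pid %d: %v (assuming dead)\n", pid, err)
	}
}

func main() {
	killIfRunning(5056)
}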

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-493600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-493600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [d7f54fd4-de56-40a6-a24a-8c2e917d2cff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [d7f54fd4-de56-40a6-a24a-8c2e917d2cff] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0059241s
I1210 05:43:15.651208   11304 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-493600 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-493600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-5znph" [c2885d47-f3b8-4f52-88b7-d5da8a74ea5f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-5znph" [c2885d47-f3b8-4f52-88b7-d5da8a74ea5f] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.0059128s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-493600 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-493600 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 6992: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 5032: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.97s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "710.0956ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "155.9843ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.87s)
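The Took "..." lines are plain wall-clock measurements around each invocation. A sketch of the same measurement, assuming the binary path used throughout this report:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	if err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list").Run(); err != nil {
		fmt.Println("run failed:", err)
	}
	fmt.Printf("Took %q to run \"profile list\"\n", time.Since(start).String())
}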

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "686.8353ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "155.2045ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 service list -o json
functional_test.go:1504: Took "880.896ms" to run "out/minikube-windows-amd64.exe -p functional-493600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 service --namespace=default --https --url hello-node: exit status 1 (15.0130467s)

                                                
                                                
-- stdout --
	https://127.0.0.1:49880

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:49880
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                    
TestFunctional/parallel/Version/short (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

                                                
                                    
TestFunctional/parallel/Version/components (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-493600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-493600
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-493600
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-493600 image ls --format short --alsologtostderr:
I1210 05:43:39.220956    7140 out.go:360] Setting OutFile to fd 1760 ...
I1210 05:43:39.267293    7140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:39.267293    7140 out.go:374] Setting ErrFile to fd 1632...
I1210 05:43:39.267293    7140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:39.279431    7140 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:39.279431    7140 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:39.286427    7140 cli_runner.go:164] Run: docker container inspect functional-493600 --format={{.State.Status}}
I1210 05:43:39.340425    7140 ssh_runner.go:195] Run: systemctl --version
I1210 05:43:39.344432    7140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-493600
I1210 05:43:39.398436    7140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49600 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-493600\id_rsa Username:docker}
I1210 05:43:39.528436    7140 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-493600 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler              │ v1.34.3           │ aec12dadf56dd │ 52.8MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.3           │ aa27095f56193 │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3           │ 5826b25d990d7 │ 74.9MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.3           │ 36eef8e07bdd6 │ 71.9MB │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ d4918ca78576a │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ localhost/my-image                          │ functional-493600 │ 008557e887258 │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-493600 │ d574272a89487 │ 30B    │
│ docker.io/kicbase/echo-server               │ functional-493600 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-493600 image ls --format table --alsologtostderr:
I1210 05:43:50.300601    3800 out.go:360] Setting OutFile to fd 1832 ...
I1210 05:43:50.350624    3800 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:50.350624    3800 out.go:374] Setting ErrFile to fd 1644...
I1210 05:43:50.350624    3800 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:50.362593    3800 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:50.362593    3800 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:50.369593    3800 cli_runner.go:164] Run: docker container inspect functional-493600 --format={{.State.Status}}
I1210 05:43:50.437599    3800 ssh_runner.go:195] Run: systemctl --version
I1210 05:43:50.441600    3800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-493600
I1210 05:43:50.495602    3800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49600 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-493600\id_rsa Username:docker}
I1210 05:43:50.669778    3800 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-493600 image ls --format json --alsologtostderr:
[{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"52800000"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d574272a89487e8e15eb47ce4c2cbdd3ef8cee3866941ae3c53dd7b85fb6b006","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-493600"],"size":"30"},{"id":"5826b25d990d7d314d236c8d128f43e44358
3891f5cdffa7bf8bca50ae9e0942","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"74900000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"008557e8872587a4addf126fdb1db812564cc922a591a2cc32858bd2d8292b96","repoDigests":[],"repoTags":["localhost/my-image:functional-493600"],"size":"1240000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-493600","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa
1a35cf691","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"71900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"88000000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-493600 image ls --format json --alsologtostderr:
I1210 05:43:49.843027    1068 out.go:360] Setting OutFile to fd 1360 ...
I1210 05:43:49.886021    1068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:49.886021    1068 out.go:374] Setting ErrFile to fd 1880...
I1210 05:43:49.886021    1068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:49.897032    1068 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:49.897032    1068 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:49.904016    1068 cli_runner.go:164] Run: docker container inspect functional-493600 --format={{.State.Status}}
I1210 05:43:49.965032    1068 ssh_runner.go:195] Run: systemctl --version
I1210 05:43:49.968020    1068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-493600
I1210 05:43:50.025028    1068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49600 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-493600\id_rsa Username:docker}
I1210 05:43:50.146024    1068 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)
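The docker images --no-trunc --format "{{json .}}" call in the stderr emits one JSON object per line, which the image-list code then decodes into the entries shown above. A sketch of consuming that stream, with the field set trimmed to what this report displays:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// row is the subset of docker's per-image JSON used in the listings above.
type row struct {
	ID         string `json:"ID"`
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	Size       string `json:"Size"`
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() { // one JSON object per line
		var r row
		if err := json.Unmarshal(sc.Bytes(), &r); err != nil {
			continue
		}
		fmt.Printf("%s:%s %s %s\n", r.Repository, r.Tag, r.ID, r.Size)
	}
	_ = cmd.Wait()
}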

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-493600 image ls --format yaml --alsologtostderr:
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "71900000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-493600
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "88000000"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "74900000"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "52800000"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "52800000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: d574272a89487e8e15eb47ce4c2cbdd3ef8cee3866941ae3c53dd7b85fb6b006
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-493600
size: "30"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-493600 image ls --format yaml --alsologtostderr:
I1210 05:43:39.704444    6280 out.go:360] Setting OutFile to fd 1336 ...
I1210 05:43:39.768440    6280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:39.768440    6280 out.go:374] Setting ErrFile to fd 1436...
I1210 05:43:39.768440    6280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:39.783445    6280 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:39.784446    6280 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:39.794434    6280 cli_runner.go:164] Run: docker container inspect functional-493600 --format={{.State.Status}}
I1210 05:43:39.872902    6280 ssh_runner.go:195] Run: systemctl --version
I1210 05:43:39.877910    6280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-493600
I1210 05:43:39.943910    6280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49600 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-493600\id_rsa Username:docker}
I1210 05:43:40.078907    6280 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (9.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 ssh pgrep buildkitd: exit status 1 (656.4704ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr: (8.5025761s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-493600 image build -t localhost/my-image:functional-493600 testdata\build --alsologtostderr:
I1210 05:43:40.915397    6376 out.go:360] Setting OutFile to fd 1668 ...
I1210 05:43:41.000216    6376 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:41.000216    6376 out.go:374] Setting ErrFile to fd 1588...
I1210 05:43:41.000303    6376 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 05:43:41.015248    6376 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:41.046263    6376 config.go:182] Loaded profile config "functional-493600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1210 05:43:41.055252    6376 cli_runner.go:164] Run: docker container inspect functional-493600 --format={{.State.Status}}
I1210 05:43:41.138276    6376 ssh_runner.go:195] Run: systemctl --version
I1210 05:43:41.142260    6376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-493600
I1210 05:43:41.211263    6376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49600 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-493600\id_rsa Username:docker}
I1210 05:43:41.336255    6376 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.376636542.tar
I1210 05:43:41.341258    6376 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 05:43:41.365274    6376 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.376636542.tar
I1210 05:43:41.375521    6376 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.376636542.tar: stat -c "%s %y" /var/lib/minikube/build/build.376636542.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.376636542.tar': No such file or directory
I1210 05:43:41.375521    6376 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.376636542.tar --> /var/lib/minikube/build/build.376636542.tar (3072 bytes)
I1210 05:43:41.419530    6376 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.376636542
I1210 05:43:41.442541    6376 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.376636542 -xf /var/lib/minikube/build/build.376636542.tar
I1210 05:43:41.457539    6376 docker.go:361] Building image: /var/lib/minikube/build/build.376636542
I1210 05:43:41.461539    6376 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-493600 /var/lib/minikube/build/build.376636542
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 1.0s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 2.0s

#6 [2/3] RUN true
#6 DONE 2.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.9s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:008557e8872587a4addf126fdb1db812564cc922a591a2cc32858bd2d8292b96
#8 writing image sha256:008557e8872587a4addf126fdb1db812564cc922a591a2cc32858bd2d8292b96 done
#8 naming to localhost/my-image:functional-493600 0.0s done
#8 DONE 0.2s
I1210 05:43:49.223138    6376 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-493600 /var/lib/minikube/build/build.376636542: (7.7615107s)
I1210 05:43:49.230294    6376 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.376636542
I1210 05:43:49.275516    6376 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.376636542.tar
I1210 05:43:49.294360    6376 build_images.go:218] Built localhost/my-image:functional-493600 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.376636542.tar
I1210 05:43:49.294360    6376 build_images.go:134] succeeded building to: functional-493600
I1210 05:43:49.294360    6376 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.60s)
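
For reference, the flow logged above is driven by a single CLI entry point; a minimal sketch against the same profile (the build-context path is illustrative, the log only records the internal scp and docker-build steps):

    out/minikube-windows-amd64.exe -p functional-493600 image build -t localhost/my-image:functional-493600 <build-context-dir>
    out/minikube-windows-amd64.exe -p functional-493600 image ls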

TestFunctional/parallel/ImageCommands/Setup (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.5767301s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-493600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr: (2.5983307s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.07s)
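
End to end, the daemon-load path exercised here (commands as recorded in the Setup and ImageLoadDaemon steps above):

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-493600
    out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600
    out/minikube-windows-amd64.exe -p functional-493600 image ls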

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr: (2.4538381s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-493600
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-493600 image load --daemon kicbase/echo-server:functional-493600 --alsologtostderr: (2.4754037s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.60s)

TestFunctional/parallel/DockerEnv/powershell (5.26s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-493600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-493600"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-493600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-493600": (3.0614322s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-493600 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-493600 docker-env | Invoke-Expression ; docker images": (2.2000521s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.26s)
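
The docker-env round trip under test is runnable from any PowerShell session (verbatim from the run lines above):

    powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-493600 docker-env | Invoke-Expression ; docker images"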

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image save kicbase/echo-server:functional-493600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image rm kicbase/echo-server:functional-493600 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.98s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 service hello-node --url --format={{.IP}}: exit status 1 (15.0097705s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)
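
The PASS despite exit status 1 is expected here: with the Docker driver on Windows the service URL is served through a foreground tunnel, so the command prints the value and then blocks until its terminal closes; the harness kills it after 15s and only asserts on stdout. Manual equivalent:

    out/minikube-windows-amd64.exe -p functional-493600 service hello-node --url --format={{.IP}}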

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.09s)
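
Save/load round trip, condensed from the ImageSaveToFile and ImageLoadFromFile run lines above:

    out/minikube-windows-amd64.exe -p functional-493600 image save kicbase/echo-server:functional-493600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-493600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-493600 image ls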

TestFunctional/parallel/UpdateContextCmd/no_changes (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-493600
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 image save --daemon kicbase/echo-server:functional-493600 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-493600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)
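
The save --daemon direction, i.e. exporting an image from the cluster back into the host Docker daemon (commands as recorded above):

    docker rmi kicbase/echo-server:functional-493600
    out/minikube-windows-amd64.exe -p functional-493600 image save --daemon kicbase/echo-server:functional-493600
    docker image inspect kicbase/echo-server:functional-493600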

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-493600 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-493600 service hello-node --url: exit status 1 (15.0109942s)

-- stdout --
	http://127.0.0.1:49973

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:49973
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.14s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-493600
--- PASS: TestFunctional/delete_echo-server_images (0.14s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-493600
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-493600
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\11304\hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (9.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:3.1: (3.5105356s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:3.3: (3.0910137s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:latest: (3.0937304s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (9.70s)
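
The remote-cache sequence above, each add taking roughly 3s on this runner:

    out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:3.1
    out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:3.3
    out/minikube-windows-amd64.exe -p functional-871500 cache add registry.k8s.io/pause:latest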

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (3.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-871500 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC788455070\001
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cache add minikube-local-cache-test:functional-871500
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 cache add minikube-local-cache-test:functional-871500: (2.5254633s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cache delete minikube-local-cache-test:functional-871500
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-871500
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (3.62s)
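
The local-image variant: build on the host, cache into the cluster, then clean up (<build-dir> stands for the per-run temp directory shown above):

    docker build -t minikube-local-cache-test:functional-871500 <build-dir>
    out/minikube-windows-amd64.exe -p functional-871500 cache add minikube-local-cache-test:functional-871500
    out/minikube-windows-amd64.exe -p functional-871500 cache delete minikube-local-cache-test:functional-871500
    docker rmi minikube-local-cache-test:functional-871500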

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (4.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (573.092ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 cache reload: (2.7248973s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (4.48s)
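
The reload sequence above in full: remove the image inside the node, confirm it is gone, then restore it from the host-side cache:

    out/minikube-windows-amd64.exe -p functional-871500 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail
    out/minikube-windows-amd64.exe -p functional-871500 cache reload
    out/minikube-windows-amd64.exe -p functional-871500 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload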

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs: (1.2423678s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi246127156\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi246127156\001\logs.txt: (1.3712076s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (1.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 config get cpus: exit status 14 (148.264ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 config get cpus: exit status 14 (147.673ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (1.06s)
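
The cycle being asserted: config get on an unset key exits with status 14, and set/get/unset round-trips cleanly:

    out/minikube-windows-amd64.exe -p functional-871500 config set cpus 2
    out/minikube-windows-amd64.exe -p functional-871500 config get cpus     # prints 2
    out/minikube-windows-amd64.exe -p functional-871500 config unset cpus
    out/minikube-windows-amd64.exe -p functional-871500 config get cpus     # exit status 14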

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (1.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 23 (600.3981ms)

-- stdout --
	* [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1210 06:20:21.931845    9716 out.go:360] Setting OutFile to fd 1940 ...
	I1210 06:20:21.973840    9716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:21.973840    9716 out.go:374] Setting ErrFile to fd 564...
	I1210 06:20:21.973840    9716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:21.986839    9716 out.go:368] Setting JSON to false
	I1210 06:20:21.989846    9716 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6553,"bootTime":1765341068,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:20:21.990844    9716 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:20:21.995842    9716 out.go:179] * [functional-871500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:20:21.998836    9716 notify.go:221] Checking for updates...
	I1210 06:20:21.998836    9716 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:20:22.000836    9716 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:22.002836    9716 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:20:22.004838    9716 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:20:22.007836    9716 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:22.009840    9716 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:20:22.010845    9716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:22.117837    9716 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:20:22.121405    9716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:22.366176    9716 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:22.349723178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:22.369175    9716 out.go:179] * Using the docker driver based on existing profile
	I1210 06:20:22.372192    9716 start.go:309] selected driver: docker
	I1210 06:20:22.372192    9716 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:22.372192    9716 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:22.408867    9716 out.go:203] 
	W1210 06:20:22.412043    9716 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 06:20:22.415804    9716 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (1.43s)
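
The first invocation deliberately under-allocates memory to provoke RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23); the second, without --memory, validates the happy path (both as recorded above):

    out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --memory 250MB --driver=docker --kubernetes-version=v1.35.0-rc.1   # exit 23
    out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-rc.1   # succeeds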

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-871500 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-rc.1: exit status 23 (743.8658ms)

-- stdout --
	* [functional-871500] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1210 06:20:21.187924   11892 out.go:360] Setting OutFile to fd 1368 ...
	I1210 06:20:21.229052   11892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:21.229657   11892 out.go:374] Setting ErrFile to fd 1276...
	I1210 06:20:21.229688   11892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:20:21.243656   11892 out.go:368] Setting JSON to false
	I1210 06:20:21.245455   11892 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6553,"bootTime":1765341068,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1210 06:20:21.245455   11892 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1210 06:20:21.267218   11892 out.go:179] * [functional-871500] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1210 06:20:21.271157   11892 notify.go:221] Checking for updates...
	I1210 06:20:21.273018   11892 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1210 06:20:21.274947   11892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 06:20:21.278162   11892 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1210 06:20:21.280653   11892 out.go:179]   - MINIKUBE_LOCATION=22094
	I1210 06:20:21.282846   11892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 06:20:21.284833   11892 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
	I1210 06:20:21.286053   11892 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 06:20:21.468484   11892 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1210 06:20:21.472914   11892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 06:20:21.716490   11892 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-10 06:20:21.695591885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1210 06:20:21.720038   11892 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 06:20:21.721784   11892 start.go:309] selected driver: docker
	I1210 06:20:21.721784   11892 start.go:927] validating driver "docker" against &{Name:functional-871500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-871500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 06:20:21.722399   11892 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 06:20:21.808895   11892 out.go:203] 
	W1210 06:20:21.810715   11892 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 06:20:21.812827   11892 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.74s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (1.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (1.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (3.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh -n functional-871500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cp functional-871500:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm1996276364\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh -n functional-871500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh -n functional-871500 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (3.15s)
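
The three cp directions covered above: host-to-node, node-to-host, and host to a fresh path inside the node, each verified with ssh + cat (<local-dir> stands for the per-run temp directory shown above):

    out/minikube-windows-amd64.exe -p functional-871500 cp testdata\cp-test.txt /home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p functional-871500 cp functional-871500:/home/docker/cp-test.txt <local-dir>\cp-test.txt
    out/minikube-windows-amd64.exe -p functional-871500 ssh -n functional-871500 "sudo cat /home/docker/cp-test.txt"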

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/11304/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /etc/test/nested/copy/11304/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (3.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/11304.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /etc/ssl/certs/11304.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/11304.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /usr/share/ca-certificates/11304.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/113042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /etc/ssl/certs/113042.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/113042.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /usr/share/ca-certificates/113042.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (3.28s)
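Note: the hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention, under which each synced certificate is reachable both under its own name and under a <hash>.0 link. A minimal sketch of deriving such a link name, assuming openssl is on PATH (the cert path is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash used for the *.0 link names.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash",
		"-in", "/etc/ssl/certs/11304.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("hash link: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}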

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 ssh "sudo systemctl is-active crio": exit status 1 (561.1181ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.56s)
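Note: the PASS here hinges on a non-zero exit being the desired outcome: systemctl is-active exits with status 3 for an inactive unit, so a failing command that prints "inactive" means the CRI-O runtime really is disabled while Docker is the active runtime. A rough sketch of that check, assuming a minikube binary on PATH (profile name taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() captures stdout only; the ssh wrapper's own error goes to stderr.
	out, err := exec.Command("minikube", "-p", "functional-871500",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("crio disabled, as expected")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}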

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (2.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (2.1701438s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (2.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.89s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "663.0251ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "151.7368ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "650.7216ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "183.9513ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (1.89s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 version -o=json --components: (1.8948757s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (1.89s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-871500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-871500
docker.io/kicbase/echo-server:functional-871500
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-871500 image ls --format short --alsologtostderr:
I1210 06:21:36.578061    6528 out.go:360] Setting OutFile to fd 1356 ...
I1210 06:21:36.649060    6528 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:36.649060    6528 out.go:374] Setting ErrFile to fd 1052...
I1210 06:21:36.649060    6528 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:36.677060    6528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:36.678063    6528 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:36.687059    6528 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
I1210 06:21:36.746054    6528 ssh_runner.go:195] Run: systemctl --version
I1210 06:21:36.749067    6528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:36.798053    6528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
I1210 06:21:36.914903    6528 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.49s)
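Note: the stderr above shows the plumbing every image ls call repeats: inspect the node container's state, resolve the host port Docker published for the node's sshd (22/tcp), then run docker images over that ssh connection. A sketch of just the port lookup, reusing the Go template visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker which host port is published for the node container's 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"functional-871500").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
}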

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-871500 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-871500 │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                        │ 3.6.6-0           │ 0a108f7189562 │ 62.5MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-871500 │ d574272a89487 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-rc.1      │ 58865405a13bc │ 89.8MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-rc.1      │ 5032a56602e1b │ 75.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-rc.1      │ 73f80cdc073da │ 51.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-rc.1      │ af0321f3a4f38 │ 70.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-871500 image ls --format table --alsologtostderr:
I1210 06:21:37.088265    3504 out.go:360] Setting OutFile to fd 1148 ...
I1210 06:21:37.153261    3504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:37.153261    3504 out.go:374] Setting ErrFile to fd 1348...
I1210 06:21:37.153261    3504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:37.165273    3504 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:37.166268    3504 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:37.173260    3504 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
I1210 06:21:37.228264    3504 ssh_runner.go:195] Run: systemctl --version
I1210 06:21:37.232270    3504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:37.282270    3504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
I1210 06:21:37.404023    3504 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-871500 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"89800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-871500"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d574272a89487e8e15eb47ce4c2cbdd3ef8cee3866941ae3c53dd7b85fb6b006","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-871500"],"size":"30"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"51700000"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"75800000"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"70700000"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"62500000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-871500 image ls --format json --alsologtostderr:
I1210 06:21:37.058271    4060 out.go:360] Setting OutFile to fd 1748 ...
I1210 06:21:37.106268    4060 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:37.106268    4060 out.go:374] Setting ErrFile to fd 2044...
I1210 06:21:37.106268    4060 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:37.118266    4060 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:37.118266    4060 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:37.126266    4060 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
I1210 06:21:37.181265    4060 ssh_runner.go:195] Run: systemctl --version
I1210 06:21:37.184262    4060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:37.234279    4060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
I1210 06:21:37.352539    4060 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.44s)
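Note: for anyone consuming this output programmatically, the JSON above is a flat array that decodes with a small struct; note that size is emitted as a string, not a number. A minimal sketch (struct name is arbitrary; the sample element is copied from the log):

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the keys visible in the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // a string, e.g. "736000"
}

func main() {
	raw := `[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"}]`
	var imgs []image
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags, img.Size)
	}
}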

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-871500 image ls --format yaml --alsologtostderr:
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "62500000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-871500
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "89800000"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "51700000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d574272a89487e8e15eb47ce4c2cbdd3ef8cee3866941ae3c53dd7b85fb6b006
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-871500
size: "30"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "75800000"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "70700000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-871500 image ls --format yaml --alsologtostderr:
I1210 06:21:36.579059   11280 out.go:360] Setting OutFile to fd 1368 ...
I1210 06:21:36.665052   11280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:36.665052   11280 out.go:374] Setting ErrFile to fd 1084...
I1210 06:21:36.665052   11280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:36.677060   11280 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:36.678063   11280 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:36.686058   11280 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
I1210 06:21:36.744066   11280 ssh_runner.go:195] Run: systemctl --version
I1210 06:21:36.748057   11280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:36.817068   11280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
I1210 06:21:36.931154   11280 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (5.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-871500 ssh pgrep buildkitd: exit status 1 (544.3654ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image build -t localhost/my-image:functional-871500 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 image build -t localhost/my-image:functional-871500 testdata\build --alsologtostderr: (4.3702813s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-871500 image build -t localhost/my-image:functional-871500 testdata\build --alsologtostderr:
I1210 06:21:37.115262   11360 out.go:360] Setting OutFile to fd 1852 ...
I1210 06:21:37.163262   11360 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:37.163262   11360 out.go:374] Setting ErrFile to fd 1748...
I1210 06:21:37.163262   11360 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:21:37.184262   11360 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:37.187270   11360 config.go:182] Loaded profile config "functional-871500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-rc.1
I1210 06:21:37.194271   11360 cli_runner.go:164] Run: docker container inspect functional-871500 --format={{.State.Status}}
I1210 06:21:37.245273   11360 ssh_runner.go:195] Run: systemctl --version
I1210 06:21:37.248274   11360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-871500
I1210 06:21:37.296273   11360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50082 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-871500\id_rsa Username:docker}
I1210 06:21:37.421503   11360 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1196847971.tar
I1210 06:21:37.427284   11360 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 06:21:37.454116   11360 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1196847971.tar
I1210 06:21:37.461125   11360 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1196847971.tar: stat -c "%s %y" /var/lib/minikube/build/build.1196847971.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1196847971.tar': No such file or directory
I1210 06:21:37.461125   11360 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1196847971.tar --> /var/lib/minikube/build/build.1196847971.tar (3072 bytes)
I1210 06:21:37.492621   11360 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1196847971
I1210 06:21:37.509621   11360 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1196847971 -xf /var/lib/minikube/build/build.1196847971.tar
I1210 06:21:37.525624   11360 docker.go:361] Building image: /var/lib/minikube/build/build.1196847971
I1210 06:21:37.529622   11360 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-871500 /var/lib/minikube/build/build.1196847971
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:dc4c57c1bac26c40d67f207708b312a814c4d5e6180d48299808c86c58d94e07
#8 writing image sha256:dc4c57c1bac26c40d67f207708b312a814c4d5e6180d48299808c86c58d94e07 done
#8 naming to localhost/my-image:functional-871500 0.0s done
#8 DONE 0.2s
I1210 06:21:41.340630   11360 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-871500 /var/lib/minikube/build/build.1196847971: (3.8109561s)
I1210 06:21:41.344075   11360 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1196847971
I1210 06:21:41.363894   11360 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1196847971.tar
I1210 06:21:41.377178   11360 build_images.go:218] Built localhost/my-image:functional-871500 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.1196847971.tar
I1210 06:21:41.377178   11360 build_images.go:134] succeeded building to: functional-871500
I1210 06:21:41.377178   11360 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (5.38s)
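Note: the build log above follows a fixed remote sequence: copy the packed build context tarball into /var/lib/minikube/build on the node, unpack it, run docker build, then delete both the directory and the tarball. A condensed sketch of that sequence; runOnNode is a hypothetical helper (the real logic lives in minikube's build_images.go and ssh_runner.go):

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode executes a shell command inside the minikube node via `minikube ssh`.
func runOnNode(profile, cmd string) error {
	out, err := exec.Command("minikube", "-p", profile, "ssh", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	const profile = "functional-871500"
	const dir = "/var/lib/minikube/build/example"
	for _, step := range []string{
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + dir + ".tar", // tarball copied up beforehand
		"docker build -t localhost/my-image:example " + dir,
		"sudo rm -rf " + dir, // cleanup, mirroring the log
		"sudo rm -f " + dir + ".tar",
	} {
		if err := runOnNode(profile, step); err != nil {
			fmt.Println(err)
			return
		}
	}
}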

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-871500
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.81s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-871500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)
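Note: StartTunnel and DeleteTunnel bracket the tunnel subtests with a start-in-background, kill-on-teardown pattern; the "unable to find parent, assuming dead" line shows the teardown tolerating a tunnel process that already exited, which is why exit status 103 still counts as a PASS. A minimal sketch of the pattern, with the sleep standing in for the subtests that run in between:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-871500",
		"tunnel", "--alsologtostderr")
	if err := cmd.Start(); err != nil { // returns immediately; tunnel keeps running
		log.Fatal(err)
	}
	time.Sleep(5 * time.Second) // stand-in for WaitService and the other subtests

	// Teardown: a tunnel process that already died is treated as success.
	if err := cmd.Process.Kill(); err != nil {
		log.Printf("assuming tunnel already dead: %v", err)
	}
}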

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (3.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr: (2.9872293s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (3.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (2.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr: (2.3475783s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (2.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (3.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-871500
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-871500 image load --daemon kicbase/echo-server:functional-871500 --alsologtostderr: (2.3635418s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (3.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image save kicbase/echo-server:functional-871500 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image rm kicbase/echo-server:functional-871500 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (1.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-871500
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-871500 image save --daemon kicbase/echo-server:functional-871500 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-871500
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.83s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-871500
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-871500
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-871500
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.07s)

TestMultiControlPlane/serial/StartCluster (304.51s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1210 06:24:18.902450   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:18.909356   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:18.921401   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:18.943148   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:18.984762   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:19.066679   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:19.228664   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:19.551049   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:20.193877   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:21.476461   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:24.038235   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:29.161134   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:39.402687   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:45.915904   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:24:59.885002   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:40.847746   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:26:05.356021   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:02.771800   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:28:02.288642   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (5m2.9418172s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: (1.5643573s)
--- PASS: TestMultiControlPlane/serial/StartCluster (304.51s)

TestMultiControlPlane/serial/DeployApp (10.15s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 kubectl -- rollout status deployment/busybox: (4.9209564s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-58nkn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-dh6zp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-kr7g2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-58nkn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-dh6zp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-kr7g2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-58nkn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-dh6zp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-kr7g2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.15s)

TestMultiControlPlane/serial/PingHostFromPods (2.89s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-58nkn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-58nkn -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-dh6zp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-dh6zp -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-kr7g2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 kubectl -- exec busybox-7b57f96db7-kr7g2 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.89s)
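Note: the awk/cut pipeline above extracts the resolved address of host.minikube.internal from the fifth line of nslookup's output, and the follow-up ping confirms each pod can reach the host. A sketch that collapses the two exec calls into one (pod name and context copied from the log; combining the steps is a simplification, not what the test itself does):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Resolve host.minikube.internal inside the pod, then ping it once.
	script := "ping -c 1 $(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)"
	out, err := exec.Command("kubectl", "--context", "ha-762100", "exec",
		"busybox-7b57f96db7-58nkn", "--", "sh", "-c", script).CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}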

TestMultiControlPlane/serial/AddWorkerNode (52.76s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node add --alsologtostderr -v 5
E1210 06:29:18.906226   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:29:29.003611   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 node add --alsologtostderr -v 5: (50.8311424s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: (1.9303918s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.76s)
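Note: a plain `node add` joins the new machine as a worker (ha-762100-m04 here); the follow-up `status` call, printed in full in later blocks, lists m04 as type: Worker with no apiserver/kubeconfig rows. Equivalent sequence:

  minikube -p ha-762100 node add       # creates and joins ha-762100-m04 as a worker
  minikube -p ha-762100 status         # verify all four nodes report Running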

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-762100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)
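Note: the jsonpath template above ranges over every node and dumps its full label map; the square brackets and commas are literal text in the template, not jsonpath syntax. A more readable variant that pairs each node name with its labels, one per line (illustrative only):

  kubectl --context ha-762100 get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.labels}{"\n"}{end}'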

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9602628s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.96s)

TestMultiControlPlane/serial/CopyFile (33.92s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --output json --alsologtostderr -v 5: (1.890719s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp testdata\cp-test.txt ha-762100:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile468562856\001\cp-test_ha-762100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100:/home/docker/cp-test.txt ha-762100-m02:/home/docker/cp-test_ha-762100_ha-762100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test_ha-762100_ha-762100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100:/home/docker/cp-test.txt ha-762100-m03:/home/docker/cp-test_ha-762100_ha-762100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test.txt"
E1210 06:29:45.919918   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test_ha-762100_ha-762100-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100:/home/docker/cp-test.txt ha-762100-m04:/home/docker/cp-test_ha-762100_ha-762100-m04.txt
E1210 06:29:46.616013   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test_ha-762100_ha-762100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp testdata\cp-test.txt ha-762100-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile468562856\001\cp-test_ha-762100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m02:/home/docker/cp-test.txt ha-762100:/home/docker/cp-test_ha-762100-m02_ha-762100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test_ha-762100-m02_ha-762100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m02:/home/docker/cp-test.txt ha-762100-m03:/home/docker/cp-test_ha-762100-m02_ha-762100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test_ha-762100-m02_ha-762100-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m02:/home/docker/cp-test.txt ha-762100-m04:/home/docker/cp-test_ha-762100-m02_ha-762100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test_ha-762100-m02_ha-762100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp testdata\cp-test.txt ha-762100-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile468562856\001\cp-test_ha-762100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m03:/home/docker/cp-test.txt ha-762100:/home/docker/cp-test_ha-762100-m03_ha-762100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test_ha-762100-m03_ha-762100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m03:/home/docker/cp-test.txt ha-762100-m02:/home/docker/cp-test_ha-762100-m03_ha-762100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test_ha-762100-m03_ha-762100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m03:/home/docker/cp-test.txt ha-762100-m04:/home/docker/cp-test_ha-762100-m03_ha-762100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test_ha-762100-m03_ha-762100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp testdata\cp-test.txt ha-762100-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile468562856\001\cp-test_ha-762100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m04:/home/docker/cp-test.txt ha-762100:/home/docker/cp-test_ha-762100-m04_ha-762100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100 "sudo cat /home/docker/cp-test_ha-762100-m04_ha-762100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m04:/home/docker/cp-test.txt ha-762100-m02:/home/docker/cp-test_ha-762100-m04_ha-762100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m02 "sudo cat /home/docker/cp-test_ha-762100-m04_ha-762100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 cp ha-762100-m04:/home/docker/cp-test.txt ha-762100-m03:/home/docker/cp-test_ha-762100-m04_ha-762100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 ssh -n ha-762100-m03 "sudo cat /home/docker/cp-test_ha-762100-m04_ha-762100-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (33.92s)
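Note: the block above walks the full copy matrix that `minikube cp` supports: host-to-node, node-to-host, and node-to-node, with a node addressed by prefixing the path with <node-name>:. Condensed illustration of the three shapes:

  minikube -p ha-762100 cp testdata\cp-test.txt ha-762100-m02:/home/docker/cp-test.txt                    # host -> node
  minikube -p ha-762100 cp ha-762100-m02:/home/docker/cp-test.txt .\cp-test_m02.txt                       # node -> host
  minikube -p ha-762100 cp ha-762100-m02:/home/docker/cp-test.txt ha-762100-m03:/home/docker/cp-test.txt  # node -> node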

TestMultiControlPlane/serial/StopSecondaryNode (13.38s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 node stop m02 --alsologtostderr -v 5: (11.7896127s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: exit status 7 (1.5876538s)

-- stdout --
	ha-762100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-762100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-762100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-762100-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1210 06:30:24.374151   11916 out.go:360] Setting OutFile to fd 1232 ...
	I1210 06:30:24.417146   11916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:30:24.417146   11916 out.go:374] Setting ErrFile to fd 1020...
	I1210 06:30:24.417146   11916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:30:24.428931   11916 out.go:368] Setting JSON to false
	I1210 06:30:24.429014   11916 mustload.go:66] Loading cluster: ha-762100
	I1210 06:30:24.429014   11916 notify.go:221] Checking for updates...
	I1210 06:30:24.429270   11916 config.go:182] Loaded profile config "ha-762100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 06:30:24.429270   11916 status.go:174] checking status of ha-762100 ...
	I1210 06:30:24.437804   11916 cli_runner.go:164] Run: docker container inspect ha-762100 --format={{.State.Status}}
	I1210 06:30:24.495424   11916 status.go:371] ha-762100 host status = "Running" (err=<nil>)
	I1210 06:30:24.495424   11916 host.go:66] Checking if "ha-762100" exists ...
	I1210 06:30:24.499585   11916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-762100
	I1210 06:30:24.557809   11916 host.go:66] Checking if "ha-762100" exists ...
	I1210 06:30:24.562488   11916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:30:24.565961   11916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-762100
	I1210 06:30:24.619317   11916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51719 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-762100\id_rsa Username:docker}
	I1210 06:30:24.810973   11916 ssh_runner.go:195] Run: systemctl --version
	I1210 06:30:24.827718   11916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:30:24.849534   11916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-762100
	I1210 06:30:24.905615   11916 kubeconfig.go:125] found "ha-762100" server: "https://127.0.0.1:51723"
	I1210 06:30:24.905615   11916 api_server.go:166] Checking apiserver status ...
	I1210 06:30:24.909992   11916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:30:24.936971   11916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2999/cgroup
	I1210 06:30:24.950403   11916 api_server.go:182] apiserver freezer: "7:freezer:/docker/7fbd2fdd883a0221f67b043956e7ef7b0a5c6903873034ec178d0411c7b62acf/kubepods/burstable/podf9ade178a10e3be521ac8f0dd2cfbcda/f3dc73e19c604997ff5b1de1ef6fffcb428be10a936d77795a8647e681baac16"
	I1210 06:30:24.954543   11916 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7fbd2fdd883a0221f67b043956e7ef7b0a5c6903873034ec178d0411c7b62acf/kubepods/burstable/podf9ade178a10e3be521ac8f0dd2cfbcda/f3dc73e19c604997ff5b1de1ef6fffcb428be10a936d77795a8647e681baac16/freezer.state
	I1210 06:30:24.968441   11916 api_server.go:204] freezer state: "THAWED"
	I1210 06:30:24.968441   11916 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51723/healthz ...
	I1210 06:30:24.978700   11916 api_server.go:279] https://127.0.0.1:51723/healthz returned 200:
	ok
	I1210 06:30:24.978770   11916 status.go:463] ha-762100 apiserver status = Running (err=<nil>)
	I1210 06:30:24.978770   11916 status.go:176] ha-762100 status: &{Name:ha-762100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:30:24.978796   11916 status.go:174] checking status of ha-762100-m02 ...
	I1210 06:30:24.985751   11916 cli_runner.go:164] Run: docker container inspect ha-762100-m02 --format={{.State.Status}}
	I1210 06:30:25.043157   11916 status.go:371] ha-762100-m02 host status = "Stopped" (err=<nil>)
	I1210 06:30:25.043913   11916 status.go:384] host is not running, skipping remaining checks
	I1210 06:30:25.043913   11916 status.go:176] ha-762100-m02 status: &{Name:ha-762100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:30:25.043953   11916 status.go:174] checking status of ha-762100-m03 ...
	I1210 06:30:25.051671   11916 cli_runner.go:164] Run: docker container inspect ha-762100-m03 --format={{.State.Status}}
	I1210 06:30:25.109193   11916 status.go:371] ha-762100-m03 host status = "Running" (err=<nil>)
	I1210 06:30:25.109193   11916 host.go:66] Checking if "ha-762100-m03" exists ...
	I1210 06:30:25.114336   11916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-762100-m03
	I1210 06:30:25.171203   11916 host.go:66] Checking if "ha-762100-m03" exists ...
	I1210 06:30:25.177085   11916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:30:25.179580   11916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-762100-m03
	I1210 06:30:25.241228   11916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52016 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-762100-m03\id_rsa Username:docker}
	I1210 06:30:25.373232   11916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:30:25.397491   11916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-762100
	I1210 06:30:25.450302   11916 kubeconfig.go:125] found "ha-762100" server: "https://127.0.0.1:51723"
	I1210 06:30:25.450302   11916 api_server.go:166] Checking apiserver status ...
	I1210 06:30:25.456434   11916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:30:25.481850   11916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3020/cgroup
	I1210 06:30:25.497418   11916 api_server.go:182] apiserver freezer: "7:freezer:/docker/432431b5c9b44b264af3eb00650c1b099a005ad438c1bd2a2daebc5ee1e0d8ea/kubepods/burstable/pod345472272184d6d099ba701e7b36e631/be6ca13549c4f83fbaac9702a6ec4774845f0bc82e68afa85585defcc45927e8"
	I1210 06:30:25.503679   11916 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/432431b5c9b44b264af3eb00650c1b099a005ad438c1bd2a2daebc5ee1e0d8ea/kubepods/burstable/pod345472272184d6d099ba701e7b36e631/be6ca13549c4f83fbaac9702a6ec4774845f0bc82e68afa85585defcc45927e8/freezer.state
	I1210 06:30:25.515104   11916 api_server.go:204] freezer state: "THAWED"
	I1210 06:30:25.515104   11916 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:51723/healthz ...
	I1210 06:30:25.521811   11916 api_server.go:279] https://127.0.0.1:51723/healthz returned 200:
	ok
	I1210 06:30:25.522807   11916 status.go:463] ha-762100-m03 apiserver status = Running (err=<nil>)
	I1210 06:30:25.522807   11916 status.go:176] ha-762100-m03 status: &{Name:ha-762100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:30:25.522807   11916 status.go:174] checking status of ha-762100-m04 ...
	I1210 06:30:25.528810   11916 cli_runner.go:164] Run: docker container inspect ha-762100-m04 --format={{.State.Status}}
	I1210 06:30:25.583930   11916 status.go:371] ha-762100-m04 host status = "Running" (err=<nil>)
	I1210 06:30:25.583930   11916 host.go:66] Checking if "ha-762100-m04" exists ...
	I1210 06:30:25.587836   11916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-762100-m04
	I1210 06:30:25.645402   11916 host.go:66] Checking if "ha-762100-m04" exists ...
	I1210 06:30:25.650553   11916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:30:25.653705   11916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-762100-m04
	I1210 06:30:25.709854   11916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52312 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-762100-m04\id_rsa Username:docker}
	I1210 06:30:25.841952   11916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:30:25.862347   11916 status.go:176] ha-762100-m04 status: &{Name:ha-762100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.38s)
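Note: `status` deliberately exits non-zero once any node reports Stopped (exit status 7 here, with only m02 down), which lets CI detect a degraded cluster without parsing the table. The exact code is minikube's internal encoding, so scripts should only rely on zero versus non-zero:

  minikube -p ha-762100 status
  rc=$?
  [ "$rc" -ne 0 ] && echo "cluster degraded, status exited $rc"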

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5639658s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (104.75s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 node start m02 --alsologtostderr -v 5: (1m42.6943723s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: (1.9255646s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (104.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.98s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9802926s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.98s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (179.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 stop --alsologtostderr -v 5: (37.3316888s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 start --wait true --alsologtostderr -v 5
E1210 06:33:02.292509   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:34:18.910954   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:34:45.924540   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 start --wait true --alsologtostderr -v 5: (2m22.0657658s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (179.73s)
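Note: the test captures `node list` before a full `stop`, restarts with `--wait true` (block until Kubernetes components are healthy), and compares the node list afterwards to prove cluster membership survives a cold restart. In outline:

  minikube -p ha-762100 node list > before.txt
  minikube -p ha-762100 stop
  minikube -p ha-762100 start --wait true
  minikube -p ha-762100 node list > after.txt
  diff before.txt after.txt   # expected: no output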

TestMultiControlPlane/serial/DeleteSecondaryNode (14.64s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 node delete m03 --alsologtostderr -v 5: (12.7980965s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: (1.4535067s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.64s)
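Note: the go-template above loops over every node's status.conditions and prints the status of the condition whose type is Ready, so a healthy cluster prints one True per remaining node. The same query without the test harness's extra quoting:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'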

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4773538s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.48s)

TestMultiControlPlane/serial/StopCluster (37.32s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 stop --alsologtostderr -v 5: (36.981873s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: exit status 7 (336.1139ms)

-- stdout --
	ha-762100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-762100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-762100-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 06:36:07.096395    2364 out.go:360] Setting OutFile to fd 1992 ...
	I1210 06:36:07.137396    2364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:36:07.137396    2364 out.go:374] Setting ErrFile to fd 1168...
	I1210 06:36:07.137396    2364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:36:07.148608    2364 out.go:368] Setting JSON to false
	I1210 06:36:07.148608    2364 mustload.go:66] Loading cluster: ha-762100
	I1210 06:36:07.148608    2364 notify.go:221] Checking for updates...
	I1210 06:36:07.149615    2364 config.go:182] Loaded profile config "ha-762100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 06:36:07.149615    2364 status.go:174] checking status of ha-762100 ...
	I1210 06:36:07.156406    2364 cli_runner.go:164] Run: docker container inspect ha-762100 --format={{.State.Status}}
	I1210 06:36:07.212082    2364 status.go:371] ha-762100 host status = "Stopped" (err=<nil>)
	I1210 06:36:07.212082    2364 status.go:384] host is not running, skipping remaining checks
	I1210 06:36:07.212082    2364 status.go:176] ha-762100 status: &{Name:ha-762100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:36:07.212082    2364 status.go:174] checking status of ha-762100-m02 ...
	I1210 06:36:07.219606    2364 cli_runner.go:164] Run: docker container inspect ha-762100-m02 --format={{.State.Status}}
	I1210 06:36:07.271490    2364 status.go:371] ha-762100-m02 host status = "Stopped" (err=<nil>)
	I1210 06:36:07.271490    2364 status.go:384] host is not running, skipping remaining checks
	I1210 06:36:07.271490    2364 status.go:176] ha-762100-m02 status: &{Name:ha-762100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:36:07.271490    2364 status.go:174] checking status of ha-762100-m04 ...
	I1210 06:36:07.277492    2364 cli_runner.go:164] Run: docker container inspect ha-762100-m04 --format={{.State.Status}}
	I1210 06:36:07.335645    2364 status.go:371] ha-762100-m04 host status = "Stopped" (err=<nil>)
	I1210 06:36:07.335645    2364 status.go:384] host is not running, skipping remaining checks
	I1210 06:36:07.335645    2364 status.go:176] ha-762100-m04 status: &{Name:ha-762100-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.32s)

TestMultiControlPlane/serial/RestartCluster (111.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 start --wait true --alsologtostderr -v 5 --driver=docker
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 start --wait true --alsologtostderr -v 5 --driver=docker: (1m49.8973683s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: (1.4768277s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (111.68s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5300345s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.53s)

TestMultiControlPlane/serial/AddSecondaryNode (107.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 node add --control-plane --alsologtostderr -v 5
E1210 06:38:02.323834   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:39:18.914566   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:39:45.929102   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 node add --control-plane --alsologtostderr -v 5: (1m45.6448067s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-762100 status --alsologtostderr -v 5: (1.9176471s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (107.56s)
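Note: with `--control-plane`, `node add` joins the new node as an additional control plane rather than a worker, restoring the third control-plane member that the earlier delete removed. Sketch:

  minikube -p ha-762100 node add --control-plane
  minikube -p ha-762100 status   # the new node shows type: Control Plane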

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9879865s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.99s)

TestImageBuild/serial/Setup (60.86s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-881900 --driver=docker
E1210 06:40:41.987061   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-881900 --driver=docker: (1m0.8625063s)
--- PASS: TestImageBuild/serial/Setup (60.86s)

TestImageBuild/serial/NormalBuild (3.77s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-881900
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-881900: (3.773021s)
--- PASS: TestImageBuild/serial/NormalBuild (3.77s)

TestImageBuild/serial/BuildWithBuildArg (2.53s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-881900
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-881900: (2.529613s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.53s)
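Note: `--build-opt` forwards options to the underlying image build, so `--build-opt=build-arg=ENV_A=test_env_str` effectively becomes `--build-arg ENV_A=test_env_str` and `--build-opt=no-cache` becomes `--no-cache`. For the argument to matter, the test's Dockerfile presumably declares a matching ARG (the Dockerfile contents are not shown in this log):

  # assumes testdata/image-build/test-arg/Dockerfile contains: ARG ENV_A
  minikube -p image-881900 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg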

TestImageBuild/serial/BuildWithDockerIgnore (1.24s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-881900
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-881900: (1.2405285s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.25s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-881900
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-881900: (1.2522222s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.25s)

TestJSONOutput/start/Command (91.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-156200 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1210 06:42:45.372192   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-156200 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m31.9043282s)
--- PASS: TestJSONOutput/start/Command (91.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.19s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-156200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-156200 --output=json --user=testUser: (1.191129s)
--- PASS: TestJSONOutput/pause/Command (1.19s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.94s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-156200 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.94s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.18s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-156200 --output=json --user=testUser
E1210 06:43:02.301781   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-156200 --output=json --user=testUser: (12.179641s)
--- PASS: TestJSONOutput/stop/Command (12.18s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.68s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-973500 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-973500 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (197.7362ms)

-- stdout --
	{"specversion":"1.0","id":"0294436d-6baa-4e58-a9c9-d85bda01333f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-973500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c382fbf6-f265-4758-a446-9b555d24c4ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6dc0d439-a8a1-4d0f-9979-e4a40b194173","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9016f417-f48c-4047-8e85-ac15b24fa3f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"306d9987-7a9d-4780-95b0-e267441a58da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"051982cd-65bc-45df-8ec5-373c4f159b40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b28fa0e-859a-42df-9bbf-996dc7c664a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-973500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-973500
--- PASS: TestErrorJSONOutput (0.68s)
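Note: with `--output=json`, every line minikube prints is a CloudEvents envelope (type io.k8s.sigs.minikube.step, .info, or .error), and the final io.k8s.sigs.minikube.error event carries the exit code the process then returns (56, DRV_UNSUPPORTED_OS, above). One way to pull the human-readable failure out of the stream (assumes jq is available on the host, which this log does not show):

  minikube start -p json-output-error-973500 --output=json --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # -> The driver 'fail' is not supported on windows/amd64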

TestKicCustomNetwork/create_custom_network (66.68s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-955100 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-955100 --network=: (1m3.003652s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-955100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-955100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-955100: (3.6131852s)
--- PASS: TestKicCustomNetwork/create_custom_network (66.68s)

TestKicCustomNetwork/use_default_bridge_network (65.59s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-319500 --network=bridge
E1210 06:44:18.919824   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:44:45.932837   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-319500 --network=bridge: (1m2.3188314s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-319500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-319500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-319500: (3.2111719s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (65.59s)
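Note: the two runs cover both ends of the --network flag: an empty value has minikube create and label its own bridge network, while --network=bridge attaches the node container to Docker's built-in bridge; in both cases the test lists networks afterwards to verify. Illustration:

  minikube start -p docker-network-955100 --network=          # minikube-managed network
  minikube start -p docker-network-319500 --network=bridge    # Docker's default bridge
  docker network ls --format '{{.Name}}'                      # confirm which networks exist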

TestKicExistingNetwork (68.54s)

=== RUN   TestKicExistingNetwork
I1210 06:45:19.732220   11304 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 06:45:19.786679   11304 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 06:45:19.793666   11304 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 06:45:19.794670   11304 cli_runner.go:164] Run: docker network inspect existing-network
W1210 06:45:19.852297   11304 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 06:45:19.852297   11304 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1210 06:45:19.852297   11304 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1210 06:45:19.857117   11304 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:45:19.931549   11304 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019172c0}
I1210 06:45:19.931549   11304 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1210 06:45:19.936196   11304 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1210 06:45:20.001448   11304 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1210 06:45:20.001448   11304 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1210 06:45:20.001448   11304 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1210 06:45:20.023110   11304 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1210 06:45:20.038831   11304 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a917a0}
I1210 06:45:20.038831   11304 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 06:45:20.043352   11304 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 06:45:20.197498   11304 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-527400 --network=existing-network
E1210 06:46:09.020365   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-527400 --network=existing-network: (1m4.6860222s)
helpers_test.go:176: Cleaning up "existing-network-527400" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-527400
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-527400: (3.2491599s)
I1210 06:46:28.214132   11304 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (68.54s)
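
Note: the network creation above first hits "Pool overlaps with other one on this address space" for 192.168.49.0/24, then succeeds on 192.168.58.0/24. The Go sketch below reproduces that retry-on-overlap loop against the docker CLI; the candidate octet list and the error-substring match are illustrative assumptions, not minikube's network_create.go verbatim.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries successive 192.168.x.0/24 candidates until
// "docker network create" stops failing with an address-pool overlap.
func createNetwork(name string) (string, error) {
	for _, octet := range []int{49, 58, 67, 76, 85} {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet is taken, try the next candidate
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createNetwork("existing-network")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created on", subnet)
}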

TestKicCustomSubnet (69.55s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-513600 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-513600 --subnet=192.168.60.0/24: (1m5.9957734s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-513600 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-513600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-513600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-513600: (3.4945926s)
--- PASS: TestKicCustomSubnet (69.55s)
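
Note: kic_custom_network_test.go:161 verifies the subnet by reading it back through a Go template. A standalone version of that check, hardcoding the network name and subnet from this run (the docker network is named after the profile):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask docker for the first IPAM subnet of the profile's network.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-513600",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", got)
	} else {
		fmt.Println("subnet verified:", got)
	}
}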

TestKicStaticIP (68.81s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-039100 --static-ip=192.168.200.200
E1210 06:48:02.306052   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-039100 --static-ip=192.168.200.200: (1m4.9390201s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-039100 ip
helpers_test.go:176: Cleaning up "static-ip-039100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-039100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-039100: (3.5476647s)
--- PASS: TestKicStaticIP (68.81s)
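
Note: a hypothetical pre-flight check for --static-ip values like the one above: the address must be IPv4, sit in a private range, and avoid the .0 network and .1 gateway addresses. This sketches the kind of validation involved; it is not minikube's actual validation code.

package main

import (
	"fmt"
	"net/netip"
)

func validateStaticIP(s string) error {
	ip, err := netip.ParseAddr(s)
	if err != nil || !ip.Is4() {
		return fmt.Errorf("%q is not an IPv4 address", s)
	}
	if !ip.IsPrivate() {
		return fmt.Errorf("%s is not in a private range", ip)
	}
	if b := ip.As4(); b[3] == 0 || b[3] == 1 {
		return fmt.Errorf("%s: last octet is reserved for the network/gateway", ip)
	}
	return nil
}

func main() {
	fmt.Println(validateStaticIP("192.168.200.200")) // <nil>
	fmt.Println(validateStaticIP("10.0.0.1"))        // reserved gateway octet
}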

TestMainNoArgs (0.16s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (129.93s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-109300 --driver=docker
E1210 06:49:18.923827   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:49:45.937564   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-109300 --driver=docker: (1m0.0584253s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-109300 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-109300 --driver=docker: (59.6829903s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-109300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.2108409s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-109300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1977475s)
helpers_test.go:176: Cleaning up "second-109300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-109300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-109300: (3.6587374s)
helpers_test.go:176: Cleaning up "first-109300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-109300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-109300: (3.6765762s)
--- PASS: TestMinikubeProfile (129.93s)
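
Note: the profile steps above round-trip through "minikube profile list -ojson". A sketch of consuming that output; the JSON shape (top-level "valid"/"invalid" arrays of profiles carrying a "Name" field) is assumed from current minikube output and may change, and "minikube" stands in for out/minikube-windows-amd64.exe.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"`
}

type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}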

TestMountStart/serial/StartWithMountFirst (14.14s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-540100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1000748683\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-540100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1000748683\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (13.1425329s)
--- PASS: TestMountStart/serial/StartWithMountFirst (14.14s)
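
Note: each MountStart profile runs its own 9p server on the host, which is why the two profiles in this group use distinct --mount-port values (46464 and 46465). The sketch below only assembles the same CLI arguments from a struct, as a reading aid for the flags; it is not minikube internals, and the example host path is made up.

package main

import "fmt"

type mountOpts struct {
	HostPath, GuestPath string
	UID, GID            int
	MSize               int // 9p maximum message size in bytes
	Port                int // host port for the 9p server
}

func (m mountOpts) args(profile string) []string {
	return []string{
		"start", "-p", profile, "--no-kubernetes", "--driver=docker",
		fmt.Sprintf("--mount-string=%s:%s", m.HostPath, m.GuestPath),
		fmt.Sprintf("--mount-uid=%d", m.UID),
		fmt.Sprintf("--mount-gid=%d", m.GID),
		fmt.Sprintf("--mount-msize=%d", m.MSize),
		fmt.Sprintf("--mount-port=%d", m.Port),
	}
}

func main() {
	m := mountOpts{HostPath: `C:\tmp\mount`, GuestPath: "/minikube-host", MSize: 6543, Port: 46464}
	fmt.Println(m.args("mount-start-1-540100"))
}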

TestMountStart/serial/VerifyMountFirst (0.55s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-540100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.55s)

TestMountStart/serial/StartWithMountSecond (13.83s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-540100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1000748683\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-540100 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial1000748683\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.8271325s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.83s)

TestMountStart/serial/VerifyMountSecond (0.54s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-540100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.54s)

TestMountStart/serial/DeleteFirst (2.45s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-540100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-540100 --alsologtostderr -v=5: (2.4524088s)
--- PASS: TestMountStart/serial/DeleteFirst (2.45s)

TestMountStart/serial/VerifyMountPostDelete (0.54s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-540100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.54s)

TestMountStart/serial/Stop (1.87s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-540100
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-540100: (1.8658243s)
--- PASS: TestMountStart/serial/Stop (1.87s)

TestMountStart/serial/RestartStopped (10.79s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-540100
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-540100: (9.7883875s)
--- PASS: TestMountStart/serial/RestartStopped (10.79s)

TestMountStart/serial/VerifyMountPostStop (0.55s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-540100 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.55s)

TestMultiNode/serial/FreshStart2Nodes (139.79s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-948900 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1210 06:53:02.309790   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-948900 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m18.8221357s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.79s)

TestMultiNode/serial/DeployApp2Nodes (7.24s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- rollout status deployment/busybox: (3.55665s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-r7qsj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-thh6s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-r7qsj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-thh6s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-r7qsj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-thh6s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.24s)

TestMultiNode/serial/PingHostFrom2Pods (1.78s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-r7qsj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-r7qsj -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-thh6s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-948900 -- exec busybox-7b57f96db7-thh6s -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.78s)
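
Note: the pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" takes the third field of the fifth output line, which for classic busybox nslookup is the resolved host IP (192.168.65.254 above). A less position-dependent parse, sketched under the assumption of that classic output format (newer busybox prints a slightly different shape):

package main

import (
	"fmt"
	"strings"
)

// hostAddress scans nslookup output for the answer block: a "Name:" line
// followed by an "Address ..." line whose third field is the IP.
func hostAddress(out string) string {
	lines := strings.Split(out, "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "Name:") && i+1 < len(lines) {
			fields := strings.Fields(lines[i+1])
			if len(fields) >= 3 && strings.HasPrefix(fields[0], "Address") {
				return fields[2]
			}
		}
	}
	return ""
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.65.254 host.minikube.internal\n"
	fmt.Println(hostAddress(sample)) // 192.168.65.254
}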

TestMultiNode/serial/AddNode (51.63s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-948900 -v=5 --alsologtostderr
E1210 06:54:18.927772   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:54:45.942567   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-948900 -v=5 --alsologtostderr: (50.3168091s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr: (1.3135233s)
--- PASS: TestMultiNode/serial/AddNode (51.63s)

TestMultiNode/serial/MultiNodeLabels (0.13s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-948900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (1.39s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.385033s)
--- PASS: TestMultiNode/serial/ProfileList (1.39s)

TestMultiNode/serial/CopyFile (19.52s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 status --output json --alsologtostderr: (1.3012209s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp testdata\cp-test.txt multinode-948900:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3971923934\001\cp-test_multinode-948900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900:/home/docker/cp-test.txt multinode-948900-m02:/home/docker/cp-test_multinode-948900_multinode-948900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m02 "sudo cat /home/docker/cp-test_multinode-948900_multinode-948900-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900:/home/docker/cp-test.txt multinode-948900-m03:/home/docker/cp-test_multinode-948900_multinode-948900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m03 "sudo cat /home/docker/cp-test_multinode-948900_multinode-948900-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp testdata\cp-test.txt multinode-948900-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3971923934\001\cp-test_multinode-948900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900-m02:/home/docker/cp-test.txt multinode-948900:/home/docker/cp-test_multinode-948900-m02_multinode-948900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900 "sudo cat /home/docker/cp-test_multinode-948900-m02_multinode-948900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900-m02:/home/docker/cp-test.txt multinode-948900-m03:/home/docker/cp-test_multinode-948900-m02_multinode-948900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m03 "sudo cat /home/docker/cp-test_multinode-948900-m02_multinode-948900-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp testdata\cp-test.txt multinode-948900-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3971923934\001\cp-test_multinode-948900-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900-m03:/home/docker/cp-test.txt multinode-948900:/home/docker/cp-test_multinode-948900-m03_multinode-948900.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900 "sudo cat /home/docker/cp-test_multinode-948900-m03_multinode-948900.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 cp multinode-948900-m03:/home/docker/cp-test.txt multinode-948900-m02:/home/docker/cp-test_multinode-948900-m03_multinode-948900-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 ssh -n multinode-948900-m02 "sudo cat /home/docker/cp-test_multinode-948900-m03_multinode-948900-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (19.52s)
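
Note: the CopyFile block is an all-pairs matrix: every node's /home/docker/cp-test.txt is copied to every other node (plus a round-trip through the host temp dir), and each copy is verified with "ssh -- sudo cat". A sketch that generates the node-to-node part of that matrix, with names from this run:

package main

import "fmt"

func main() {
	nodes := []string{"multinode-948900", "multinode-948900-m02", "multinode-948900-m03"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			fmt.Printf("minikube -p multinode-948900 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}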

TestMultiNode/serial/StopNode (3.85s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 node stop m03: (1.776325s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-948900 status: exit status 7 (1.0119102s)

-- stdout --
	multinode-948900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-948900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-948900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr: exit status 7 (1.058573s)

-- stdout --
	multinode-948900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-948900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-948900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 06:55:29.460397    8220 out.go:360] Setting OutFile to fd 1192 ...
	I1210 06:55:29.511203    8220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:29.511203    8220 out.go:374] Setting ErrFile to fd 1248...
	I1210 06:55:29.511203    8220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:55:29.522307    8220 out.go:368] Setting JSON to false
	I1210 06:55:29.522370    8220 mustload.go:66] Loading cluster: multinode-948900
	I1210 06:55:29.522440    8220 notify.go:221] Checking for updates...
	I1210 06:55:29.522947    8220 config.go:182] Loaded profile config "multinode-948900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 06:55:29.523005    8220 status.go:174] checking status of multinode-948900 ...
	I1210 06:55:29.529931    8220 cli_runner.go:164] Run: docker container inspect multinode-948900 --format={{.State.Status}}
	I1210 06:55:29.585343    8220 status.go:371] multinode-948900 host status = "Running" (err=<nil>)
	I1210 06:55:29.585343    8220 host.go:66] Checking if "multinode-948900" exists ...
	I1210 06:55:29.589737    8220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-948900
	I1210 06:55:29.644825    8220 host.go:66] Checking if "multinode-948900" exists ...
	I1210 06:55:29.651576    8220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:55:29.655546    8220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948900
	I1210 06:55:29.711512    8220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53786 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-948900\id_rsa Username:docker}
	I1210 06:55:29.844717    8220 ssh_runner.go:195] Run: systemctl --version
	I1210 06:55:29.862496    8220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:55:29.889917    8220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-948900
	I1210 06:55:29.945647    8220 kubeconfig.go:125] found "multinode-948900" server: "https://127.0.0.1:53785"
	I1210 06:55:29.945647    8220 api_server.go:166] Checking apiserver status ...
	I1210 06:55:29.949838    8220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 06:55:29.976102    8220 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2861/cgroup
	I1210 06:55:29.991771    8220 api_server.go:182] apiserver freezer: "7:freezer:/docker/3982f163b69145451461eeef68a03d0c46c7a2b540916af663d0e7cdd4589239/kubepods/burstable/pod477e97bab601ae952c8b10109fb362b9/04d50a7c320a46e559842370bda65ff62ac1e880dce9159bc489296a7fad4bcd"
	I1210 06:55:29.996422    8220 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3982f163b69145451461eeef68a03d0c46c7a2b540916af663d0e7cdd4589239/kubepods/burstable/pod477e97bab601ae952c8b10109fb362b9/04d50a7c320a46e559842370bda65ff62ac1e880dce9159bc489296a7fad4bcd/freezer.state
	I1210 06:55:30.010773    8220 api_server.go:204] freezer state: "THAWED"
	I1210 06:55:30.010773    8220 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53785/healthz ...
	I1210 06:55:30.023866    8220 api_server.go:279] https://127.0.0.1:53785/healthz returned 200:
	ok
	I1210 06:55:30.023866    8220 status.go:463] multinode-948900 apiserver status = Running (err=<nil>)
	I1210 06:55:30.023866    8220 status.go:176] multinode-948900 status: &{Name:multinode-948900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:55:30.023866    8220 status.go:174] checking status of multinode-948900-m02 ...
	I1210 06:55:30.032069    8220 cli_runner.go:164] Run: docker container inspect multinode-948900-m02 --format={{.State.Status}}
	I1210 06:55:30.085095    8220 status.go:371] multinode-948900-m02 host status = "Running" (err=<nil>)
	I1210 06:55:30.085095    8220 host.go:66] Checking if "multinode-948900-m02" exists ...
	I1210 06:55:30.089737    8220 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-948900-m02
	I1210 06:55:30.142880    8220 host.go:66] Checking if "multinode-948900-m02" exists ...
	I1210 06:55:30.147401    8220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 06:55:30.151102    8220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-948900-m02
	I1210 06:55:30.204546    8220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53850 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-948900-m02\id_rsa Username:docker}
	I1210 06:55:30.339501    8220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 06:55:30.360348    8220 status.go:176] multinode-948900-m02 status: &{Name:multinode-948900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:55:30.360348    8220 status.go:174] checking status of multinode-948900-m03 ...
	I1210 06:55:30.367540    8220 cli_runner.go:164] Run: docker container inspect multinode-948900-m03 --format={{.State.Status}}
	I1210 06:55:30.422002    8220 status.go:371] multinode-948900-m03 host status = "Stopped" (err=<nil>)
	I1210 06:55:30.422179    8220 status.go:384] host is not running, skipping remaining checks
	I1210 06:55:30.422218    8220 status.go:176] multinode-948900-m03 status: &{Name:multinode-948900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.85s)
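
Note: "minikube status" deliberately exits non-zero when any host is down, so the exit status 7 above is the expected signal that m03 is stopped, not a test failure. Reading that exit code from Go, assuming only that the binary is on PATH as "minikube":

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-948900", "status").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("status exited %d (7 means a host is stopped)\n", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
	fmt.Print(string(out))
}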

TestMultiNode/serial/StartAfterStop (14.99s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 node start m03 -v=5 --alsologtostderr: (13.5574009s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 status -v=5 --alsologtostderr: (1.3076071s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.99s)

TestMultiNode/serial/RestartKeepsNodes (83.68s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-948900
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-948900
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-948900: (24.8928631s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-948900 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-948900 --wait=true -v=5 --alsologtostderr: (58.4857193s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-948900
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.68s)

TestMultiNode/serial/DeleteNode (7.44s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 node delete m03: (6.183829s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (7.44s)

TestMultiNode/serial/StopMultiNode (24.11s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 stop
E1210 06:57:22.004057   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-948900 stop: (23.5547137s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-948900 status: exit status 7 (282.5083ms)

-- stdout --
	multinode-948900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-948900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr: exit status 7 (273.6726ms)

-- stdout --
	multinode-948900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-948900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 06:57:40.473344    7020 out.go:360] Setting OutFile to fd 1232 ...
	I1210 06:57:40.518434    7020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:57:40.518434    7020 out.go:374] Setting ErrFile to fd 232...
	I1210 06:57:40.518434    7020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 06:57:40.528490    7020 out.go:368] Setting JSON to false
	I1210 06:57:40.528490    7020 mustload.go:66] Loading cluster: multinode-948900
	I1210 06:57:40.528490    7020 notify.go:221] Checking for updates...
	I1210 06:57:40.529374    7020 config.go:182] Loaded profile config "multinode-948900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
	I1210 06:57:40.529374    7020 status.go:174] checking status of multinode-948900 ...
	I1210 06:57:40.535884    7020 cli_runner.go:164] Run: docker container inspect multinode-948900 --format={{.State.Status}}
	I1210 06:57:40.591910    7020 status.go:371] multinode-948900 host status = "Stopped" (err=<nil>)
	I1210 06:57:40.591910    7020 status.go:384] host is not running, skipping remaining checks
	I1210 06:57:40.591910    7020 status.go:176] multinode-948900 status: &{Name:multinode-948900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 06:57:40.591910    7020 status.go:174] checking status of multinode-948900-m02 ...
	I1210 06:57:40.598580    7020 cli_runner.go:164] Run: docker container inspect multinode-948900-m02 --format={{.State.Status}}
	I1210 06:57:40.655301    7020 status.go:371] multinode-948900-m02 host status = "Stopped" (err=<nil>)
	I1210 06:57:40.655301    7020 status.go:384] host is not running, skipping remaining checks
	I1210 06:57:40.655301    7020 status.go:176] multinode-948900-m02 status: &{Name:multinode-948900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (60.23s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-948900 --wait=true -v=5 --alsologtostderr --driver=docker
E1210 06:58:02.314454   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-948900 --wait=true -v=5 --alsologtostderr --driver=docker: (58.8297916s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-948900 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.23s)

TestMultiNode/serial/ValidateNameConflict (62.79s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-948900
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-948900-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-948900-m02 --driver=docker: exit status 14 (212.9446ms)

-- stdout --
	* [multinode-948900-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-948900-m02' is duplicated with machine name 'multinode-948900-m02' in profile 'multinode-948900'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-948900-m03 --driver=docker
E1210 06:59:18.932126   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:59:25.389116   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-948900-m03 --driver=docker: (58.2292224s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-948900
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-948900: exit status 80 (639.2099ms)

-- stdout --
	* Adding node m03 to cluster multinode-948900 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-948900-m03 already exists in multinode-948900-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_12.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-948900-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-948900-m03: (3.5592086s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (62.79s)
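
Note: both failures above are intentional. Exit 14 fires because "multinode-948900-m02" collides with a machine name inside the multinode-948900 profile; exit 80 fires because adding node m03 collides with the standalone multinode-948900-m03 profile. A sketch of the first uniqueness rule, with names from this run:

package main

import "fmt"

// nameConflicts reports whether a new profile name clashes with an existing
// profile or with a machine name inside any existing profile.
func nameConflicts(newProfile string, existing map[string][]string) bool {
	for profile, machines := range existing {
		if profile == newProfile {
			return true
		}
		for _, m := range machines {
			if m == newProfile {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-948900": {"multinode-948900", "multinode-948900-m02"},
	}
	fmt.Println(nameConflicts("multinode-948900-m02", existing)) // true  -> exit 14
	fmt.Println(nameConflicts("multinode-948900-m03", existing)) // false -> start succeeds
}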

TestPreload (134.96s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-648600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
preload_test.go:41: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-648600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m10.460943s)
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-648600 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-648600 image pull gcr.io/k8s-minikube/busybox: (2.1310141s)
preload_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-648600
preload_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-648600: (12.0626359s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-648600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-648600 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (46.1925892s)
preload_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-648600 image list
helpers_test.go:176: Cleaning up "test-preload-648600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-648600
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-648600: (3.6232386s)
--- PASS: TestPreload (134.96s)
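
Note: what TestPreload asserts is that busybox, pulled while the cluster ran with --preload=false, is still listed after the restart with --preload=true lays down the preloaded image tarball. The final check, sketched with "minikube" standing in for out/minikube-windows-amd64.exe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-648600", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the preload restart")
	} else {
		fmt.Println("busybox missing after restart")
	}
}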

TestScheduledStopWindows (125.41s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-094800 --memory=3072 --driver=docker
E1210 07:02:49.037641   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:03:02.318783   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-094800 --memory=3072 --driver=docker: (58.89385s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-094800 --schedule 5m
minikube stop output:

scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-094800 -n scheduled-stop-094800
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-094800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-094800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-094800 --schedule 5s: (1.1764797s)
minikube stop output:

scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-094800
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-094800: exit status 7 (221.5031ms)

-- stdout --
	scheduled-stop-094800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-094800 -n scheduled-stop-094800
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-094800 -n scheduled-stop-094800: exit status 7 (213.2394ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-094800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-094800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-094800: (2.5308719s)
--- PASS: TestScheduledStopWindows (125.41s)
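
Note: the scheduled stop lives as a unit inside the node, which the test reads back with "systemctl show minikube-scheduled-stop --no-page". That command prints Key=Value pairs, one per line; a sketch that pulls out ActiveState, with the profile name from this run and "minikube" standing in for the test binary:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "ssh", "-p", "scheduled-stop-094800", "--",
		"sudo", "systemctl", "show", "minikube-scheduled-stop", "--no-page").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ActiveState=") {
			fmt.Println(sc.Text())
		}
	}
}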

TestInsufficientStorage (15.73s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-316900 --memory=3072 --output=json --wait=true --driver=docker
E1210 07:04:18.936943   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-316900 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (12.1219348s)

-- stdout --
	{"specversion":"1.0","id":"f8918334-b53f-4335-8793-8eb79faf560a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-316900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4066289a-84b0-456c-a8b5-d0f2d8952e31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"06b55224-f0f9-4edb-980b-a1483f85869b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"66a2bb37-4806-4086-ae1b-b2d5f9e8e8b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"41c10f16-e53c-468c-8c57-daebdde1b313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22094"}}
	{"specversion":"1.0","id":"c4a43d89-bea3-4232-8c1e-aec46bce9961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04c46f61-f3f6-47f9-8ae4-1271a488af73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7ccad28a-8fd4-4faf-b52a-5a2c9b80ed5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"350a075e-ca7c-45bb-badc-578e65caac7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cae1ae2d-0756-4f1d-8681-c03327ed3661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"af4756b4-c414-4bd8-a5c9-b9ca9088ba7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-316900\" primary control-plane node in \"insufficient-storage-316900\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a39d0e4-97f1-43ea-ae51-2432d0ff099d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7fe567e-821d-4637-9c3e-f89535ce5446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4adc9068-aa61-42eb-8fa0-c68899513cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-316900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-316900 --output=json --layout=cluster: exit status 7 (591.5356ms)
-- stdout --
	{"Name":"insufficient-storage-316900","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-316900","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1210 07:04:22.965978   13052 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-316900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-316900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-316900 --output=json --layout=cluster: exit status 7 (561.4361ms)
-- stdout --
	{"Name":"insufficient-storage-316900","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-316900","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1210 07:04:23.533148    6728 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-316900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1210 07:04:23.552218    6728 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-316900\events.json: The system cannot find the file specified.
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-316900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-316900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-316900: (2.4560268s)
--- PASS: TestInsufficientStorage (15.73s)
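
Note on the JSON output above: every line that "minikube start --output=json" emits is a self-contained CloudEvents-style JSON object, with the "type" field distinguishing step, info, and error events. The following is an illustrative Go sketch (not part of the test suite) of filtering the error events out of such a stream, assuming only the fields visible in the capture above:

	// filter_events.go: read a minikube --output=json event stream on stdin
	// and print only the io.k8s.sigs.minikube.error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"` // message, exitcode, advice, ...
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // individual event lines can be long
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // not a JSON event line; skip
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Run against the capture above, this would print the single RSRC_DOCKER_STORAGE event with exit code 26.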

TestRunningBinaryUpgrade (380.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.2855868695.exe start -p running-upgrade-001500 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.2855868695.exe start -p running-upgrade-001500 --memory=3072 --vm-driver=docker: (1m4.4606473s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-001500 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-001500 --memory=3072 --alsologtostderr -v=1 --driver=docker: (5m10.7122222s)
helpers_test.go:176: Cleaning up "running-upgrade-001500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-001500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-001500: (4.1405905s)
--- PASS: TestRunningBinaryUpgrade (380.13s)
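
Note: the binary-upgrade tests in this group all follow the same two-step protocol: start a cluster with an archived release binary (unpacked to a per-run temp path), then run "start" again on the same profile with the binary under test and require it to adopt the existing cluster. Reproduced by hand, the sequence is roughly (temp path abbreviated):

	minikube-v1.35.0.exe start -p running-upgrade-001500 --memory=3072 --vm-driver=docker
	out\minikube-windows-amd64.exe start -p running-upgrade-001500 --memory=3072 --alsologtostderr -v=1 --driver=docker
	out\minikube-windows-amd64.exe delete -p running-upgrade-001500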

TestMissingContainerUpgrade (254.17s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3069606534.exe start -p missing-upgrade-513700 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3069606534.exe start -p missing-upgrade-513700 --memory=3072 --driver=docker: (2m52.707187s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-513700
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-513700: (2.1824227s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-513700
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-513700 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-513700 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m14.0761055s)
helpers_test.go:176: Cleaning up "missing-upgrade-513700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-513700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-513700: (3.5796393s)
--- PASS: TestMissingContainerUpgrade (254.17s)

TestStoppedBinaryUpgrade/Setup (1.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.24s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (241.1619ms)
-- stdout --
	* [NoKubernetes-513700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22094
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.24s)
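
Note: exit status 14 (MK_USAGE) is the expected result of this subtest: --no-kubernetes and --kubernetes-version are mutually exclusive. If a version were pinned in the global config rather than on the command line, clearing it as the error text suggests should let a no-Kubernetes start proceed:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-513700 --no-kubernetes --driver=docker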

TestNoKubernetes/serial/StartWithK8s (101.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m41.3183792s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-513700 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.99s)

TestStoppedBinaryUpgrade/Upgrade (456.93s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.415067546.exe start -p stopped-upgrade-513700 --memory=3072 --vm-driver=docker
E1210 07:04:45.951136   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.415067546.exe start -p stopped-upgrade-513700 --memory=3072 --vm-driver=docker: (2m52.0434167s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.415067546.exe -p stopped-upgrade-513700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.415067546.exe -p stopped-upgrade-513700 stop: (2.5305094s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-513700 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-513700 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m42.3551842s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (456.93s)

TestNoKubernetes/serial/StartWithStopK8s (35.6s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (31.9993709s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-513700 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-513700 status -o json: exit status 2 (655.7267ms)
-- stdout --
	{"Name":"NoKubernetes-513700","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-513700
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-513700: (2.9498965s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.60s)

TestNoKubernetes/serial/Start (24.95s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (24.9465432s)
--- PASS: TestNoKubernetes/serial/Start (24.95s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.54s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-513700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-513700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (534.9252ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.54s)
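
Note: the non-zero exit is the assertion here. "systemctl is-active --quiet" exits 0 only when the unit is active, and the remote status 3 is systemd's code for an inactive unit, which is exactly what a node started with --no-kubernetes should report for kubelet. The same probe can be run by hand (dropping --quiet also prints the state):

	out/minikube-windows-amd64.exe ssh -p NoKubernetes-513700 "sudo systemctl is-active kubelet"
	(expected: prints "inactive" and exits with status 3)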

TestNoKubernetes/serial/ProfileList (10.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-windows-amd64.exe profile list: (7.313878s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (2.9014897s)
--- PASS: TestNoKubernetes/serial/ProfileList (10.22s)

TestNoKubernetes/serial/Stop (1.94s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-513700
no_kubernetes_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-513700: (1.935772s)
--- PASS: TestNoKubernetes/serial/Stop (1.94s)

TestNoKubernetes/serial/StartNoArgs (10.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --driver=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-513700 --driver=docker: (10.2503118s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.25s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.56s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-513700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-513700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (558.7712ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.56s)

TestPause/serial/Start (90.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-762600 --memory=3072 --install-addons=false --wait=all --driver=docker
E1210 07:09:18.942122   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:09:45.955612   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-762600 --memory=3072 --install-addons=false --wait=all --driver=docker: (1m30.8541686s)
--- PASS: TestPause/serial/Start (90.85s)

TestPause/serial/SecondStartNoReconfiguration (47.35s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-762600 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-762600 --alsologtostderr -v=1 --driver=docker: (47.3296451s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.35s)

TestPause/serial/Pause (1.04s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-762600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-762600 --alsologtostderr -v=5: (1.0427583s)
--- PASS: TestPause/serial/Pause (1.04s)

TestPause/serial/VerifyStatus (0.62s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-762600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-762600 --output=json --layout=cluster: exit status 2 (618.134ms)
-- stdout --
	{"Name":"pause-762600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-762600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.62s)
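
Note: --layout=cluster reports HTTP-style status codes (200 OK, 405 Stopped, 418 Paused here; 507 InsufficientStorage and 500 Error earlier in this report), and the overall exit status 2 marks the paused state rather than a command failure. An illustrative Go sketch (not part of the test suite) of decoding that JSON, assuming only the fields visible in this report:

	// cluster_status.go: decode "minikube status --output=json --layout=cluster"
	// from stdin and print the per-node component states.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type component struct {
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	}

	type cluster struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []node `json:"Nodes"`
	}

	func main() {
		var c cluster
		if err := json.NewDecoder(os.Stdin).Decode(&c); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Printf("%s: %s (%d)\n", c.Name, c.StatusName, c.StatusCode)
		for _, n := range c.Nodes {
			for name, comp := range n.Components {
				fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, comp.StatusName, comp.StatusCode)
			}
		}
	}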

TestPause/serial/Unpause (0.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-762600 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

TestPause/serial/PauseAgain (1.18s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-762600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-762600 --alsologtostderr -v=5: (1.1782513s)
--- PASS: TestPause/serial/PauseAgain (1.18s)

TestPause/serial/DeletePaused (3.74s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-762600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-762600 --alsologtostderr -v=5: (3.743988s)
--- PASS: TestPause/serial/DeletePaused (3.74s)

TestPause/serial/VerifyDeletedResources (1.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4872152s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-762600
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-762600: exit status 1 (56.9879ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-762600: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.67s)
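
Note: the "failure" inside this subtest is the verification itself. Once delete has completed, inspecting the volume is expected to fail, and the profile should no longer appear in "minikube profile list" or "docker ps -a":

	docker volume inspect pause-762600
	(expected after deletion: exit status 1, "no such volume")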

TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-513700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-513700: (1.5939559s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

TestStartStop/group/old-k8s-version/serial/FirstStart (79.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-412400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-412400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m19.4440439s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (79.44s)

TestStartStop/group/embed-certs/serial/FirstStart (82.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-757000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.3
E1210 07:16:05.406711   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-757000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.3: (1m22.17701s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-412400 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8a0a00bb-7eba-4f0b-8c19-c50cd2bfdaa6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8a0a00bb-7eba-4f0b-8c19-c50cd2bfdaa6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0073833s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-412400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)
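
Note: every DeployApp subtest in this group uses the same pattern: create the pod from testdata\busybox.yaml, poll until the pod carrying the integration-test=busybox label is Running and Ready, then exec a trivial command to prove the container is reachable. A rough hand-run equivalent of the helper (an approximation, not the helper itself):

	kubectl --context old-k8s-version-412400 create -f testdata\busybox.yaml
	kubectl --context old-k8s-version-412400 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-412400 exec busybox -- /bin/sh -c "ulimit -n"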

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-412400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-412400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.5725682s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-412400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.78s)

TestStartStop/group/old-k8s-version/serial/Stop (12.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-412400 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-412400 --alsologtostderr -v=3: (12.3210601s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-412400 -n old-k8s-version-412400
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-412400 -n old-k8s-version-412400: exit status 7 (222.1122ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-412400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.55s)

TestStartStop/group/old-k8s-version/serial/SecondStart (36.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-412400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-412400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (33.8003061s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-412400 -n old-k8s-version-412400
start_stop_delete_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-412400 -n old-k8s-version-412400: (2.420023s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (36.22s)

TestStartStop/group/embed-certs/serial/DeployApp (10.7s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-757000 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [20498f30-6e4a-48e5-a458-8a53ffc45f21] Pending
helpers_test.go:353: "busybox" [20498f30-6e4a-48e5-a458-8a53ffc45f21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [20498f30-6e4a-48e5-a458-8a53ffc45f21] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0067364s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-757000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.70s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.71s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-757000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-757000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.5069754s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-757000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.71s)

TestStartStop/group/embed-certs/serial/Stop (12.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-757000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-757000 --alsologtostderr -v=3: (12.2207581s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.22s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.51s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-757000 -n embed-certs-757000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-757000 -n embed-certs-757000: exit status 7 (210.6859ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-757000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.51s)

TestStartStop/group/embed-certs/serial/SecondStart (65.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-757000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-757000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.3: (1m5.105441s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-757000 -n embed-certs-757000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (65.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (22.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-m6hrh" [95376636-93cf-4097-94ea-dbaf37847f86] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-m6hrh" [95376636-93cf-4097-94ea-dbaf37847f86] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.006804s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (22.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-m6hrh" [95376636-93cf-4097-94ea-dbaf37847f86] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0070929s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-412400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.36s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-412400 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.57s)

TestStartStop/group/old-k8s-version/serial/Pause (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-412400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-412400 --alsologtostderr -v=1: (1.2792389s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-412400 -n old-k8s-version-412400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-412400 -n old-k8s-version-412400: exit status 2 (644.9365ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-412400 -n old-k8s-version-412400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-412400 -n old-k8s-version-412400: exit status 2 (661.9878ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-412400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-412400 --alsologtostderr -v=1: (1.0610262s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-412400 -n old-k8s-version-412400
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-412400 -n old-k8s-version-412400
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.24s)
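
Note: the Pause subtests drive a pause / status / unpause / status cycle, using Go-template output to read one status field at a time. While paused, --format={{.APIServer}} prints "Paused" and --format={{.Kubelet}} prints "Stopped", with the status command exiting 2 (compare exit status 7 for a stopped host in the EnableAddonAfterStop checks above). For example:

	out/minikube-windows-amd64.exe pause -p old-k8s-version-412400
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-412400
	(while paused: prints "Paused" and exits with status 2)
	out/minikube-windows-amd64.exe unpause -p old-k8s-version-412400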

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-144100 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.3
E1210 07:18:02.333293   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-144100 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.3: (1m56.2297746s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.23s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vxfsf" [dcdfc615-14d6-4c42-bc2c-67932ea5c237] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0065838s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.35s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-vxfsf" [dcdfc615-14d6-4c42-bc2c-67932ea5c237] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00824s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-757000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.35s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.56s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-757000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.56s)

TestStartStop/group/embed-certs/serial/Pause (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-757000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-757000 --alsologtostderr -v=1: (1.372151s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-757000 -n embed-certs-757000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-757000 -n embed-certs-757000: exit status 2 (627.9669ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-757000 -n embed-certs-757000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-757000 -n embed-certs-757000: exit status 2 (632.1582ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-757000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-757000 --alsologtostderr -v=1: (1.0109139s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-757000 -n embed-certs-757000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-757000 -n embed-certs-757000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-144100 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5503530f-2074-4189-a960-42b4f5113d59] Pending
helpers_test.go:353: "busybox" [5503530f-2074-4189-a960-42b4f5113d59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5503530f-2074-4189-a960-42b4f5113d59] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0102578s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-144100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-144100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-144100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.289041s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-144100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-144100 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-144100 --alsologtostderr -v=3: (12.202315s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100: exit status 7 (209.8522ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-144100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.51s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-144100 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-144100 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.3: (52.3491417s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100
E1210 07:21:14.024424   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.01s)

TestNetworkPlugins/group/auto/Start (99.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
E1210 07:21:11.444373   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:11.451393   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:11.464384   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:11.487391   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:11.530382   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:11.613395   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:11.776377   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:12.098399   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:21:12.741434   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m39.6499423s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.65s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9dtt7" [76d1402c-00ef-4a37-bd38-be2e26a67743] Running
E1210 07:21:16.586659   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0042093s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9dtt7" [76d1402c-00ef-4a37-bd38-be2e26a67743] Running
E1210 07:21:21.709502   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0064702s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-144100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.53s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-144100 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.49s)
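
The image audit above shells out to "minikube image list --format=json" and flags anything outside the expected core registries (here the busybox test image). A rough Go sketch; the repoTags field name and the allow-list are assumptions for illustration, not the suite's exact schema or rules.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image approximates one entry of "minikube image list --format=json"
// (field name assumed for this sketch).
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "default-k8s-diff-port-144100",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}

	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}

	// Illustrative allow-list: core images live in registry.k8s.io; anything
	// else gets reported, like gcr.io/k8s-minikube/busybox:1.28.4-glibc above.
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}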

TestStartStop/group/default-k8s-diff-port/serial/Pause (5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-144100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-144100 --alsologtostderr -v=1: (1.1380195s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100: exit status 2 (657.2707ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100: exit status 2 (651.475ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-144100 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-144100 -n default-k8s-diff-port-144100
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.00s)
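
The pause cycle above, condensed into a Go sketch: pause the profile, read the APIServer and Kubelet states (both reported through non-zero status exits, which the test tolerates), then unpause. Same profile name as the log; not the test's literal code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// state runs "minikube status" with one Go-template field and returns the
// printed text, ignoring the non-zero exit that accompanies Paused/Stopped.
func state(profile, field string) string {
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "default-k8s-diff-port-144100"

	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", state(profile, "APIServer")) // expect "Paused"
	fmt.Println("kubelet:  ", state(profile, "Kubelet"))   // expect "Stopped"

	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver after unpause:", state(profile, "APIServer"))
}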

TestNetworkPlugins/group/kindnet/Start (90.55s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E1210 07:21:52.434742   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m30.5492894s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.55s)

TestNetworkPlugins/group/auto/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-648600 "pgrep -a kubelet"
I1210 07:22:28.154603   11304 config.go:182] Loaded profile config "auto-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.58s)
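
What each KubeletFlags subtest boils down to, as a small Go sketch: ssh into the node and dump the kubelet command line with "pgrep -a kubelet" so that flags implied by the chosen CNI mode can be asserted. The substring checked here is illustrative, not the suite's exact assertion.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs, against the auto-648600 profile above.
	out, err := exec.Command("minikube", "ssh", "-p", "auto-648600",
		"pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	cmdline := string(out)
	fmt.Print(cmdline)

	// Example assertion (hypothetical): kubelet should carry a config flag.
	if !strings.Contains(cmdline, "--config") {
		fmt.Println("kubelet is running without an explicit --config flag")
	}
}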

TestNetworkPlugins/group/auto/NetCatPod (15.52s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-c2ds7" [906d3588-96c8-4f5d-b4f6-526eeb3d1d20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:22:33.398595   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-c2ds7" [906d3588-96c8-4f5d-b4f6-526eeb3d1d20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.0063172s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.52s)
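
The pod wait that helpers_test.go performs can be approximated with client-go: list pods matching the app=netcat selector in the default namespace and poll until one reports Running. A sketch assuming the kubeconfig context from the log above; the real helper also tracks Ready conditions and logs phase transitions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client for the profile's kubeconfig context, as
	// "kubectl --context auto-648600" does.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "auto-648600"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(15 * time.Minute) // the test's wait budget
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println(p.Name, "is Running")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=netcat")
}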

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
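
The three probes above (DNS, Localhost, HairPin) all execute inside the deployed netcat pod via kubectl exec: nslookup checks in-cluster service DNS, while nc -z only tests that a port accepts a connection (-w and -i are timeouts in seconds), with the HairPin case dialing the pod back through its own service name. Collected into one sketch:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment, mirroring the
// "kubectl exec deployment/netcat" invocations in the log above.
func probe(args ...string) {
	base := []string{"--context", "auto-648600", "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	fmt.Printf("%v -> err=%v\n%s", args, err, out)
}

func main() {
	// DNS: resolve the apiserver's in-cluster service name.
	probe("nslookup", "kubernetes.default")
	// Localhost: reach the pod's own port over loopback.
	probe("/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: reach the pod back through its own service name.
	probe("/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}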

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-4wnbd" [235ebcd6-58d1-4293-ab07-5a4d3b544f08] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0050741s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.79s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-648600 "pgrep -a kubelet"
I1210 07:23:16.983617   11304 config.go:182] Loaded profile config "kindnet-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.79s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.47s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5slcg" [a5e7e1ce-c5cb-4b4a-b32d-0f7ac8a9c255] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5slcg" [a5e7e1ce-c5cb-4b4a-b32d-0f7ac8a9c255] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.0079634s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.47s)

TestNetworkPlugins/group/flannel/Start (93.27s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m33.2723362s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.27s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (102.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E1210 07:24:18.955148   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:45.970072   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-949500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m42.5979361s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.60s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-zq6rr" [c9cec73f-4406-4ce6-ad31-940796b16224] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0063412s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-648600 "pgrep -a kubelet"
I1210 07:24:56.979078   11304 config.go:182] Loaded profile config "flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.59s)

TestNetworkPlugins/group/flannel/NetCatPod (17.47s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tprfd" [833a6883-6ecd-4393-9451-3dece9885d61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:24:58.152875   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.159258   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.171471   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.193240   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.235387   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.316821   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.479351   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:58.801597   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:24:59.444330   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:25:00.726658   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:25:03.289258   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:25:08.412197   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-144100\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-tprfd" [833a6883-6ecd-4393-9451-3dece9885d61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 17.0076186s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (17.47s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (99.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m39.3568692s)
--- PASS: TestNetworkPlugins/group/bridge/Start (99.36s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-648600 "pgrep -a kubelet"
I1210 07:25:57.958849   11304 config.go:182] Loaded profile config "enable-default-cni-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.58s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-648600 replace --force -f testdata\netcat-deployment.yaml: (1.1221457s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2nv5w" [27c6ab4a-c02d-47fb-90ac-fd5eec56132b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2nv5w" [27c6ab4a-c02d-47fb-90ac-fd5eec56132b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.0186449s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.43s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/kubenet/Start (99.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E1210 07:27:28.656414   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:28.663361   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:28.675056   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:28.697756   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:28.740368   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m39.8696735s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (99.87s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-648600 "pgrep -a kubelet"
E1210 07:27:28.821715   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:28.983162   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:29.305522   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1210 07:27:29.367373   11304 config.go:182] Loaded profile config "bridge-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.55s)

TestNetworkPlugins/group/bridge/NetCatPod (19.54s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lldxf" [76d18ed9-39c6-469e-a1c9-f5f1ebddffb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:27:29.948342   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:31.231080   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:27:33.792827   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-lldxf" [76d18ed9-39c6-469e-a1c9-f5f1ebddffb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 19.0070657s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (19.54s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1210 07:27:49.157270   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestStartStop/group/no-preload/serial/Stop (1.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-099700 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-099700 --alsologtostderr -v=3: (1.9280405s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.56s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-099700 -n no-preload-099700: exit status 7 (234.7865ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-099700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.56s)

TestNetworkPlugins/group/calico/Start (125.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m5.7195086s)
--- PASS: TestNetworkPlugins/group/calico/Start (125.72s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.03s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-648600 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-648600 "pgrep -a kubelet": (1.0262918s)
I1210 07:28:29.073156   11304 config.go:182] Loaded profile config "kubenet-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.03s)

TestNetworkPlugins/group/kubenet/NetCatPod (17.47s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rpk4g" [7057b0c9-b90c-422b-b5f8-c64259c1dac0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:28:30.701484   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-rpk4g" [7057b0c9-b90c-422b-b5f8-c64259c1dac0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 17.0070634s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (17.47s)

TestNetworkPlugins/group/kubenet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.24s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/kubenet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)

TestNetworkPlugins/group/false/Start (94.09s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m34.092953s)
--- PASS: TestNetworkPlugins/group/false/Start (94.09s)

TestStartStop/group/newest-cni/serial/Stop (3.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-525200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-525200 --alsologtostderr -v=3: (3.5866469s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.59s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.57s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-525200 -n newest-cni-525200: exit status 7 (247.6906ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-525200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.57s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-45rmv" [36d9fc07-6cad-4f57-89ef-52b1aad112da] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-45rmv" [36d9fc07-6cad-4f57-89ef-52b1aad112da] Running
E1210 07:30:31.372360   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0069652s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-648600 "pgrep -a kubelet"
I1210 07:30:36.861343   11304 config.go:182] Loaded profile config "calico-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.57s)

TestNetworkPlugins/group/calico/NetCatPod (14.51s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2x6kn" [ddb96e9f-a2b0-4721-a693-25a885ea8f9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:30:42.039380   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-871500\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-2x6kn" [ddb96e9f-a2b0-4721-a693-25a885ea8f9e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.0060793s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.51s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/false/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-648600 "pgrep -a kubelet"
I1210 07:30:56.588770   11304 config.go:182] Loaded profile config "false-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.53s)

TestNetworkPlugins/group/false/NetCatPod (15.51s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-smsxk" [1a97f07a-f0b2-4252-8606-1f87df5562f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:30:59.089540   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.097539   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.109542   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.132543   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.174546   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.257548   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.420543   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:30:59.743557   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:31:00.386564   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:31:01.668961   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:31:04.231984   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-smsxk" [1a97f07a-f0b2-4252-8606-1f87df5562f8] Running
E1210 07:31:09.354165   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:31:11.452767   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-412400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.0099039s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.51s)

TestNetworkPlugins/group/false/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.23s)

TestNetworkPlugins/group/false/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/false/Localhost
E1210 07:31:12.335776   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:194: (dbg) Run:  kubectl --context false-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.21s)

TestNetworkPlugins/group/false/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)
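Note: the Localhost and HairPin checks above both reduce to a timed TCP connect ("nc -w 5 -z <target> 8080"): Localhost dials 127.0.0.1 inside the pod, while HairPin dials the pod's own Service name to confirm hairpin NAT works. A minimal Go sketch of the same probe, with the target address purely illustrative:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Hairpin case: the pod dials its own Service name; Localhost case: 127.0.0.1.
	addr := "netcat:8080" // hypothetical host:port matching the test's target
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	conn.Close() // -z semantics: connect, then close without sending data
	fmt.Println("probe ok:", addr)
}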

TestNetworkPlugins/group/custom-flannel/Start (83.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E1210 07:31:40.080758   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-648600 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m23.2982092s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.30s)
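Note: this run starts the cluster with a user-supplied CNI manifest (--cni=testdata\kube-flannel.yaml) instead of a built-in CNI, and the suite drives the minikube binary as a subprocess. A stripped-down sketch of that invocation (binary name and profile are illustrative, not the suite's internals; flags are copied from the logged command line):

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start",
		"-p", "custom-flannel-demo",
		"--memory=3072",
		"--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", // custom CNI manifest applied in place of a built-in CNI
		"--driver=docker")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}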

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-648600 "pgrep -a kubelet"
I1210 07:32:51.200598   11304 config.go:182] Loaded profile config "custom-flannel-648600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.56s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-648600 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cbjgn" [91d4666c-b8a9-4bff-81bd-94cd264e8438] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 07:32:56.372889   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-cbjgn" [91d4666c-b8a9-4bff-81bd-94cd264e8438] Running
E1210 07:33:02.346829   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-493600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.0062141s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.62s)
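Note: the NetCatPod step waits for pods matching app=netcat to report healthy, polling until a timeout. A rough equivalent of that wait loop, shelling out to kubectl rather than using the suite's helpers (selector, namespace, and intervals are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every listed pod phase is "Running".
func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	deadline := time.Now().Add(15 * time.Minute) // same budget as the test's wait
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", "default", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			if phases := strings.Fields(string(out)); len(phases) > 0 && allRunning(phases) {
				fmt.Println("app=netcat healthy")
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}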

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-648600 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-648600 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)
E1210 07:33:42.967179   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:33:50.027726   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:33:51.835366   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:34:10.510725   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
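Note: the repeated cert_rotation errors above come from cached API transports still pointing at client certificates whose profile directories were already deleted; the files are gone, so every reload fails. A small sketch of the underlying failure mode, checking the cert pair before loading it (the .crt path is copied from the log; the matching .key path is assumed):

package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

func main() {
	cert := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-648600\client.crt`
	key := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\enable-default-cni-648600\client.key`
	if _, err := os.Stat(cert); err != nil {
		fmt.Println("client cert missing:", err) // the log's failure mode: file deleted with the profile
		return
	}
	if _, err := tls.LoadX509KeyPair(cert, key); err != nil {
		fmt.Println("cannot load key pair:", err)
		return
	}
	fmt.Println("client cert loadable")
}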

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-525200 image list --format=json
E1210 07:35:50.788940   11304 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-648600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

Test skip (34/427)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
13 TestDownloadOnly/v1.34.3/preload-exists 0.26
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
44 TestAddons/parallel/Registry 21.48
46 TestAddons/parallel/Ingress 26.47
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
99 TestFunctional/parallel/DashboardCmd 300.03
103 TestFunctional/parallel/MountCmd 0
106 TestFunctional/parallel/ServiceCmdConnect 13.32
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 0.48
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
257 TestGvisorAddon 0
286 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
287 TestISOImage 0
354 TestScheduledStopUnix 0
355 TestSkaffold 0
380 TestStartStop/group/disable-driver-mounts 1.36
402 TestNetworkPlugins/group/cilium 11.9

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.3/preload-exists (0.26s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1210 05:29:17.350834   11304 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
W1210 05:29:17.453774   11304 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
W1210 05:29:17.614839   11304 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.34.3/preload-exists (0.26s)
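Note: the preload-exists check above probes the two known tarball locations over HTTP and treats a 404 from both as "no preload". A compact sketch of that probe (primary URL copied from the log line above; the GitHub release URL is the fallback):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Println("no preload, status:", resp.StatusCode) // this run saw 404 from both locations
		return
	}
	fmt.Println("preload exists")
}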

TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (21.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.4246ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-pq84f" [03d5358b-7e6c-4b84-b043-b39c7c6fa5a6] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0052083s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-pbfph" [f27c3254-e493-4324-9ec7-b5c23dbd8777] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0065842s
addons_test.go:394: (dbg) Run:  kubectl --context addons-949500 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-949500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-949500 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.0048932s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable registry --alsologtostderr -v=1: (1.303419s)
--- SKIP: TestAddons/parallel/Registry (21.48s)

TestAddons/parallel/Ingress (26.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-949500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-949500 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-949500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [019e2ae5-4d61-4a7b-82b7-262d548a886f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [019e2ae5-4d61-4a7b-82b7-262d548a886f] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0057368s
I1210 05:36:33.186358   11304 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable ingress-dns --alsologtostderr -v=1: (2.5254816s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-949500 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-949500 addons disable ingress --alsologtostderr -v=1: (8.4657512s)
--- SKIP: TestAddons/parallel/Ingress (26.47s)
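Note: the ingress check drives curl with an explicit Host header so the request to 127.0.0.1 is routed by the nginx ingress rule for nginx.example.com. In Go the same thing is done by setting Request.Host, as in this illustrative sketch:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Request.Host overrides the Host header, so the ingress rule for
	// nginx.example.com matches even though we dial 127.0.0.1.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}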

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-493600 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-493600 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 13728: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)
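Note: the skip here ends with the dashboard subprocess cleanup failing: terminating the spawned pid returns "Access is denied." on Windows. A minimal sketch of that cleanup step (the pid is copied from the log and only illustrative):

package main

import (
	"fmt"
	"os"
)

func main() {
	pid := 13728 // pid copied from the log, purely illustrative here
	p, err := os.FindProcess(pid)
	if err != nil {
		fmt.Println("find:", err) // on Windows, FindProcess opens the process and can fail
		return
	}
	if err := p.Kill(); err != nil {
		fmt.Println("kill:", err) // maps to TerminateProcess; can return "Access is denied."
		return
	}
	fmt.Println("terminated", pid)
}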

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (13.32s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-493600 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-493600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-jfgv7" [6f749d80-7af6-4555-89fb-d6ae61a5924e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-jfgv7" [6f749d80-7af6-4555-89fb-d6ae61a5924e] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.0057918s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (13.32s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-871500 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-871500 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 7320: Access is denied.
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.36s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-768900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-768900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-768900: (1.3603098s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.36s)

TestNetworkPlugins/group/cilium (11.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-648600 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-648600

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-648600

>>> host: /etc/nsswitch.conf:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/hosts:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/resolv.conf:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-648600

>>> host: crictl pods:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: crictl containers:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> k8s: describe netcat deployment:
error: context "cilium-648600" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-648600" does not exist

>>> k8s: netcat logs:
error: context "cilium-648600" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-648600" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-648600" does not exist

>>> k8s: coredns logs:
error: context "cilium-648600" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-648600" does not exist

>>> k8s: api server logs:
error: context "cilium-648600" does not exist

>>> host: /etc/cni:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: ip a s:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: ip r s:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: iptables-save:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: iptables table nat:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-648600

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-648600

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-648600" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-648600" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-648600

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-648600

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-648600" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-648600" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-648600" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-648600" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-648600" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: kubelet daemon config:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> k8s: kubelet logs:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 07:08:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:55052
  name: kubernetes-upgrade-458400
- cluster:
    certificate-authority: C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 10 Dec 2025 07:09:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://127.0.0.1:55085
  name: running-upgrade-001500
contexts:
- context:
    cluster: kubernetes-upgrade-458400
    user: kubernetes-upgrade-458400
  name: kubernetes-upgrade-458400
- context:
    cluster: running-upgrade-001500
    user: running-upgrade-001500
  name: running-upgrade-001500
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-458400
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400/client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-458400/client.key
- name: running-upgrade-001500
  user:
    client-certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-001500/client.crt
    client-key: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-001500/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-648600

>>> host: docker daemon status:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: docker daemon config:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: docker system info:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: cri-docker daemon status:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: cri-docker daemon config:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: cri-dockerd version:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: containerd daemon status:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: containerd daemon config:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: containerd config dump:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: crio daemon status:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: crio daemon config:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: /etc/crio:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

>>> host: crio config:
* Profile "cilium-648600" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-648600"

----------------------- debugLogs end: cilium-648600 [took: 11.3813557s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-648600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-648600
--- SKIP: TestNetworkPlugins/group/cilium (11.90s)
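Note: every kubectl probe in the cilium debug log fails with "context was not found" because the cilium-648600 profile was never started, so the kubeconfig dumped above only contains the two *-upgrade contexts. A short sketch of guarding against that, loading the kubeconfig with client-go and checking the context by name before use (requires k8s.io/client-go; the KUBECONFIG path is illustrative):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	name := "cilium-648600"
	if _, ok := cfg.Contexts[name]; !ok {
		// Mirrors kubectl's "context was not found for specified context" error.
		fmt.Printf("context %q not found; %d context(s) present\n", name, len(cfg.Contexts))
		return
	}
	fmt.Println("context exists:", name)
}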